00:00:00.000 Started by upstream project "autotest-per-patch" build number 131184
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.103 The recommended git tool is: git
00:00:00.103 using credential 00000000-0000-0000-0000-000000000002
00:00:00.104 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.177 Fetching changes from the remote Git repository
00:00:00.180 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.251 Using shallow fetch with depth 1
00:00:00.251 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.251 > git --version # timeout=10
00:00:00.308 > git --version # 'git version 2.39.2'
00:00:00.308 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.357 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.357 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.396 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.411 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.424 Checking out Revision 3f5fbcceba25866ebf7e22fd0e5d30548272f62c (FETCH_HEAD)
00:00:08.424 > git config core.sparsecheckout # timeout=10
00:00:08.436 > git read-tree -mu HEAD # timeout=10
00:00:08.454 > git checkout -f 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=5
00:00:08.473 Commit message: "packer: Bump java's version"
00:00:08.473 > git rev-list --no-walk 3f5fbcceba25866ebf7e22fd0e5d30548272f62c # timeout=10
00:00:08.576 [Pipeline] Start of Pipeline
00:00:08.587 [Pipeline] library
00:00:08.588 Loading library shm_lib@master
00:00:08.588 Library shm_lib@master is cached. Copying from home.
00:00:08.603 [Pipeline] node
00:00:08.619 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3
00:00:08.621 [Pipeline] {
00:00:08.630 [Pipeline] catchError
00:00:08.631 [Pipeline] {
00:00:08.641 [Pipeline] wrap
00:00:08.647 [Pipeline] {
00:00:08.652 [Pipeline] stage
00:00:08.653 [Pipeline] { (Prologue)
00:00:08.668 [Pipeline] echo
00:00:08.670 Node: VM-host-SM17
00:00:08.677 [Pipeline] cleanWs
00:00:08.686 [WS-CLEANUP] Deleting project workspace...
00:00:08.686 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.693 [WS-CLEANUP] done 00:00:08.889 [Pipeline] setCustomBuildProperty 00:00:08.964 [Pipeline] httpRequest 00:00:09.344 [Pipeline] echo 00:00:09.345 Sorcerer 10.211.164.101 is alive 00:00:09.351 [Pipeline] retry 00:00:09.353 [Pipeline] { 00:00:09.364 [Pipeline] httpRequest 00:00:09.368 HttpMethod: GET 00:00:09.369 URL: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:09.369 Sending request to url: http://10.211.164.101/packages/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:09.376 Response Code: HTTP/1.1 200 OK 00:00:09.376 Success: Status code 200 is in the accepted range: 200,404 00:00:09.377 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:16.275 [Pipeline] } 00:00:16.292 [Pipeline] // retry 00:00:16.300 [Pipeline] sh 00:00:16.581 + tar --no-same-owner -xf jbp_3f5fbcceba25866ebf7e22fd0e5d30548272f62c.tar.gz 00:00:16.595 [Pipeline] httpRequest 00:00:17.535 [Pipeline] echo 00:00:17.537 Sorcerer 10.211.164.101 is alive 00:00:17.547 [Pipeline] retry 00:00:17.549 [Pipeline] { 00:00:17.563 [Pipeline] httpRequest 00:00:17.568 HttpMethod: GET 00:00:17.568 URL: http://10.211.164.101/packages/spdk_30f8ce7c55a8640a43a824320d32a3093b9397de.tar.gz 00:00:17.569 Sending request to url: http://10.211.164.101/packages/spdk_30f8ce7c55a8640a43a824320d32a3093b9397de.tar.gz 00:00:17.570 Response Code: HTTP/1.1 200 OK 00:00:17.570 Success: Status code 200 is in the accepted range: 200,404 00:00:17.571 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk_30f8ce7c55a8640a43a824320d32a3093b9397de.tar.gz 00:00:38.891 [Pipeline] } 00:00:38.909 [Pipeline] // retry 00:00:38.916 [Pipeline] sh 00:00:39.197 + tar --no-same-owner -xf spdk_30f8ce7c55a8640a43a824320d32a3093b9397de.tar.gz 00:00:42.495 [Pipeline] sh 00:00:42.774 + git -C spdk log --oneline -n5 00:00:42.774 30f8ce7c5 bdev_ut: Comparison operator and tests fixes 00:00:42.774 3fa316cba test: Comparison operator fixes 00:00:42.774 f999d8912 bdev_xnvme: add support for dataset management 00:00:42.774 95d6c9fac xnvme: bump to 0.7.5 00:00:42.774 3a02df0b1 event: add new 'mappings' parameter to static scheduler 00:00:42.793 [Pipeline] writeFile 00:00:42.808 [Pipeline] sh 00:00:43.087 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:43.099 [Pipeline] sh 00:00:43.376 + cat autorun-spdk.conf 00:00:43.377 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.377 SPDK_TEST_NVMF=1 00:00:43.377 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.377 SPDK_TEST_URING=1 00:00:43.377 SPDK_TEST_USDT=1 00:00:43.377 SPDK_RUN_UBSAN=1 00:00:43.377 NET_TYPE=virt 00:00:43.377 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:43.382 RUN_NIGHTLY=0 00:00:43.384 [Pipeline] } 00:00:43.399 [Pipeline] // stage 00:00:43.413 [Pipeline] stage 00:00:43.416 [Pipeline] { (Run VM) 00:00:43.426 [Pipeline] sh 00:00:43.699 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:43.699 + echo 'Start stage prepare_nvme.sh' 00:00:43.699 Start stage prepare_nvme.sh 00:00:43.699 + [[ -n 1 ]] 00:00:43.699 + disk_prefix=ex1 00:00:43.699 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 ]] 00:00:43.699 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf ]] 00:00:43.699 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf 00:00:43.699 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:43.699 ++ SPDK_TEST_NVMF=1 00:00:43.699 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:43.699 ++ SPDK_TEST_URING=1 00:00:43.699 ++ 
SPDK_TEST_USDT=1 00:00:43.699 ++ SPDK_RUN_UBSAN=1 00:00:43.699 ++ NET_TYPE=virt 00:00:43.699 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:43.699 ++ RUN_NIGHTLY=0 00:00:43.699 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3 00:00:43.699 + nvme_files=() 00:00:43.699 + declare -A nvme_files 00:00:43.699 + backend_dir=/var/lib/libvirt/images/backends 00:00:43.699 + nvme_files['nvme.img']=5G 00:00:43.699 + nvme_files['nvme-cmb.img']=5G 00:00:43.699 + nvme_files['nvme-multi0.img']=4G 00:00:43.699 + nvme_files['nvme-multi1.img']=4G 00:00:43.699 + nvme_files['nvme-multi2.img']=4G 00:00:43.699 + nvme_files['nvme-openstack.img']=8G 00:00:43.699 + nvme_files['nvme-zns.img']=5G 00:00:43.699 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:43.699 + (( SPDK_TEST_FTL == 1 )) 00:00:43.699 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:43.699 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:43.699 + for nvme in "${!nvme_files[@]}" 00:00:43.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:43.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:43.699 + for nvme in "${!nvme_files[@]}" 00:00:43.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:43.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:43.699 + for nvme in "${!nvme_files[@]}" 00:00:43.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:43.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:43.699 + for nvme in "${!nvme_files[@]}" 00:00:43.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:43.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:43.699 + for nvme in "${!nvme_files[@]}" 00:00:43.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:43.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:43.699 + for nvme in "${!nvme_files[@]}" 00:00:43.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:43.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:43.699 + for nvme in "${!nvme_files[@]}" 00:00:43.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:43.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:43.699 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:43.699 + echo 'End stage prepare_nvme.sh' 00:00:43.699 End stage prepare_nvme.sh 00:00:43.711 [Pipeline] sh 00:00:43.991 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:43.991 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b 
/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:00:43.991
00:00:43.991 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant
00:00:43.991 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk
00:00:43.991 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3
00:00:43.991 HELP=0
00:00:43.991 DRY_RUN=0
00:00:43.991 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:00:43.991 NVME_DISKS_TYPE=nvme,nvme,
00:00:43.991 NVME_AUTO_CREATE=0
00:00:43.991 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:00:43.991 NVME_CMB=,,
00:00:43.991 NVME_PMR=,,
00:00:43.991 NVME_ZNS=,,
00:00:43.991 NVME_MS=,,
00:00:43.991 NVME_FDP=,,
00:00:43.991 SPDK_VAGRANT_DISTRO=fedora39
00:00:43.991 SPDK_VAGRANT_VMCPU=10
00:00:43.991 SPDK_VAGRANT_VMRAM=12288
00:00:43.991 SPDK_VAGRANT_PROVIDER=libvirt
00:00:43.991 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:43.991 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:43.991 SPDK_OPENSTACK_NETWORK=0
00:00:43.991 VAGRANT_PACKAGE_BOX=0
00:00:43.991 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:00:43.991 FORCE_DISTRO=true
00:00:43.991 VAGRANT_BOX_VERSION=
00:00:43.991 EXTRA_VAGRANTFILES=
00:00:43.991 NIC_MODEL=e1000
00:00:43.991
00:00:43.991 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt'
00:00:43.991 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3
00:00:47.280 Bringing machine 'default' up with 'libvirt' provider...
00:00:47.846 ==> default: Creating image (snapshot of base box volume).
00:00:48.104 ==> default: Creating domain with the following settings...
00:00:48.104 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728979909_1dc8a1ed3c1ec7f55fe7
00:00:48.104 ==> default: -- Domain type: kvm
00:00:48.104 ==> default: -- Cpus: 10
00:00:48.104 ==> default: -- Feature: acpi
00:00:48.104 ==> default: -- Feature: apic
00:00:48.104 ==> default: -- Feature: pae
00:00:48.104 ==> default: -- Memory: 12288M
00:00:48.104 ==> default: -- Memory Backing: hugepages:
00:00:48.104 ==> default: -- Management MAC:
00:00:48.104 ==> default: -- Loader:
00:00:48.104 ==> default: -- Nvram:
00:00:48.104 ==> default: -- Base box: spdk/fedora39
00:00:48.104 ==> default: -- Storage pool: default
00:00:48.104 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728979909_1dc8a1ed3c1ec7f55fe7.img (20G)
00:00:48.104 ==> default: -- Volume Cache: default
00:00:48.104 ==> default: -- Kernel:
00:00:48.104 ==> default: -- Initrd:
00:00:48.104 ==> default: -- Graphics Type: vnc
00:00:48.104 ==> default: -- Graphics Port: -1
00:00:48.104 ==> default: -- Graphics IP: 127.0.0.1
00:00:48.104 ==> default: -- Graphics Password: Not defined
00:00:48.104 ==> default: -- Video Type: cirrus
00:00:48.104 ==> default: -- Video VRAM: 9216
00:00:48.104 ==> default: -- Sound Type:
00:00:48.104 ==> default: -- Keymap: en-us
00:00:48.104 ==> default: -- TPM Path:
00:00:48.104 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:48.104 ==> default: -- Command line args:
00:00:48.104 ==> default: -> value=-device,
00:00:48.104 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:48.104 ==> default: -> value=-drive,
00:00:48.104 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:00:48.104 ==> default: -> value=-device,
00:00:48.104 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:48.104 ==> default: -> value=-device,
00:00:48.104 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:48.104 ==> default: -> value=-drive,
00:00:48.104 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:48.104 ==> default: -> value=-device,
00:00:48.104 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:48.104 ==> default: -> value=-drive,
00:00:48.104 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:48.104 ==> default: -> value=-device,
00:00:48.104 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:48.104 ==> default: -> value=-drive,
00:00:48.104 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:48.104 ==> default: -> value=-device,
00:00:48.104 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:48.104 ==> default: Creating shared folders metadata...
00:00:48.104 ==> default: Starting domain.
00:00:50.007 ==> default: Waiting for domain to get an IP address...
00:01:08.089 ==> default: Waiting for SSH to become available...
00:01:09.026 ==> default: Configuring and enabling network interfaces...
00:01:13.231 default: SSH address: 192.168.121.140:22
00:01:13.231 default: SSH username: vagrant
00:01:13.231 default: SSH auth method: private key
00:01:15.790 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:23.907 ==> default: Mounting SSHFS shared folder...
00:01:25.283 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:25.283 ==> default: Checking Mount..
00:01:26.223 ==> default: Folder Successfully Mounted!
00:01:26.223 ==> default: Running provisioner: file...
00:01:27.162 default: ~/.gitconfig => .gitconfig
00:01:27.729
00:01:27.729 SUCCESS!
00:01:27.729
00:01:27.730 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:01:27.730 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:27.730 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:01:27.730
00:01:27.739 [Pipeline] }
00:01:27.754 [Pipeline] // stage
00:01:27.763 [Pipeline] dir
00:01:27.763 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/fedora39-libvirt
00:01:27.765 [Pipeline] {
00:01:27.777 [Pipeline] catchError
00:01:27.779 [Pipeline] {
00:01:27.793 [Pipeline] sh
00:01:28.075 + vagrant ssh-config --host vagrant
00:01:28.075 + sed -ne /^Host/,$p
00:01:28.075 + tee ssh_conf
00:01:32.266 Host vagrant
00:01:32.266 HostName 192.168.121.140
00:01:32.266 User vagrant
00:01:32.266 Port 22
00:01:32.266 UserKnownHostsFile /dev/null
00:01:32.266 StrictHostKeyChecking no
00:01:32.266 PasswordAuthentication no
00:01:32.266 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:32.266 IdentitiesOnly yes
00:01:32.266 LogLevel FATAL
00:01:32.266 ForwardAgent yes
00:01:32.266 ForwardX11 yes
00:01:32.266
00:01:32.279 [Pipeline] withEnv
00:01:32.280 [Pipeline] {
00:01:32.293 [Pipeline] sh
00:01:32.572 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:32.572 source /etc/os-release
00:01:32.572 [[ -e /image.version ]] && img=$(< /image.version)
00:01:32.572 # Minimal, systemd-like check.
00:01:32.572 if [[ -e /.dockerenv ]]; then
00:01:32.572 # Clear garbage from the node's name:
00:01:32.572 # agt-er_autotest_547-896 -> autotest_547-896
00:01:32.572 # $HOSTNAME is the actual container id
00:01:32.572 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:32.572 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:32.572 # We can assume this is a mount from a host where container is running,
00:01:32.572 # so fetch its hostname to easily identify the target swarm worker.
00:01:32.572 container="$(< /etc/hostname) ($agent)" 00:01:32.572 else 00:01:32.572 # Fallback 00:01:32.572 container=$agent 00:01:32.572 fi 00:01:32.572 fi 00:01:32.572 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:32.572 00:01:32.844 [Pipeline] } 00:01:32.861 [Pipeline] // withEnv 00:01:32.870 [Pipeline] setCustomBuildProperty 00:01:32.886 [Pipeline] stage 00:01:32.888 [Pipeline] { (Tests) 00:01:32.906 [Pipeline] sh 00:01:33.259 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:33.271 [Pipeline] sh 00:01:33.547 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:33.821 [Pipeline] timeout 00:01:33.821 Timeout set to expire in 1 hr 0 min 00:01:33.824 [Pipeline] { 00:01:33.840 [Pipeline] sh 00:01:34.121 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:34.689 HEAD is now at 30f8ce7c5 bdev_ut: Comparison operator and tests fixes 00:01:34.702 [Pipeline] sh 00:01:34.982 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:35.254 [Pipeline] sh 00:01:35.534 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:35.809 [Pipeline] sh 00:01:36.089 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:36.369 ++ readlink -f spdk_repo 00:01:36.369 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:36.369 + [[ -n /home/vagrant/spdk_repo ]] 00:01:36.369 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:36.369 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:36.369 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:36.369 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:36.369 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:36.369 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:36.369 + cd /home/vagrant/spdk_repo 00:01:36.369 + source /etc/os-release 00:01:36.369 ++ NAME='Fedora Linux' 00:01:36.369 ++ VERSION='39 (Cloud Edition)' 00:01:36.369 ++ ID=fedora 00:01:36.369 ++ VERSION_ID=39 00:01:36.369 ++ VERSION_CODENAME= 00:01:36.369 ++ PLATFORM_ID=platform:f39 00:01:36.369 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:36.369 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:36.369 ++ LOGO=fedora-logo-icon 00:01:36.369 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:36.369 ++ HOME_URL=https://fedoraproject.org/ 00:01:36.369 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:36.369 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:36.369 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:36.369 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:36.369 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:36.369 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:36.369 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:36.369 ++ SUPPORT_END=2024-11-12 00:01:36.369 ++ VARIANT='Cloud Edition' 00:01:36.369 ++ VARIANT_ID=cloud 00:01:36.369 + uname -a 00:01:36.369 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:36.369 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:36.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:36.631 Hugepages 00:01:36.631 node hugesize free / total 00:01:36.890 node0 1048576kB 0 / 0 00:01:36.890 node0 2048kB 0 / 0 00:01:36.890 00:01:36.890 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:36.890 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:36.890 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:36.890 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:01:36.890 + rm -f /tmp/spdk-ld-path 00:01:36.890 + source autorun-spdk.conf 00:01:36.890 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.890 ++ SPDK_TEST_NVMF=1 00:01:36.890 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.890 ++ SPDK_TEST_URING=1 00:01:36.890 ++ SPDK_TEST_USDT=1 00:01:36.890 ++ SPDK_RUN_UBSAN=1 00:01:36.890 ++ NET_TYPE=virt 00:01:36.890 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.890 ++ RUN_NIGHTLY=0 00:01:36.890 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:36.890 + [[ -n '' ]] 00:01:36.890 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:36.890 + for M in /var/spdk/build-*-manifest.txt 00:01:36.890 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:36.890 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:36.890 + for M in /var/spdk/build-*-manifest.txt 00:01:36.890 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:36.890 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:36.890 + for M in /var/spdk/build-*-manifest.txt 00:01:36.890 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:36.890 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:36.890 ++ uname 00:01:36.890 + [[ Linux == \L\i\n\u\x ]] 00:01:36.890 + sudo dmesg -T 00:01:36.890 + sudo dmesg --clear 00:01:36.890 + dmesg_pid=5207 00:01:36.890 + [[ Fedora Linux == FreeBSD ]] 00:01:36.890 + sudo dmesg -Tw 00:01:36.890 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:36.890 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:36.890 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:36.890 + [[ -x /usr/src/fio-static/fio ]] 00:01:36.890 + export FIO_BIN=/usr/src/fio-static/fio 00:01:36.890 + FIO_BIN=/usr/src/fio-static/fio 00:01:36.890 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:36.890 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:36.890 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:36.890 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:36.890 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:36.890 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:36.890 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:36.890 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:36.890 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:36.890 Test configuration: 00:01:36.890 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.890 SPDK_TEST_NVMF=1 00:01:36.890 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.890 SPDK_TEST_URING=1 00:01:36.890 SPDK_TEST_USDT=1 00:01:36.890 SPDK_RUN_UBSAN=1 00:01:36.890 NET_TYPE=virt 00:01:36.890 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.149 RUN_NIGHTLY=0 08:12:38 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:37.149 08:12:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:37.149 08:12:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:37.149 08:12:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:37.149 08:12:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:37.149 08:12:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:37.149 08:12:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.149 08:12:38 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.149 08:12:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.149 08:12:38 -- paths/export.sh@5 -- $ export PATH 00:01:37.149 08:12:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:37.149 08:12:38 -- common/autobuild_common.sh@485 -- $ 
out=/home/vagrant/spdk_repo/spdk/../output 00:01:37.149 08:12:38 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:37.149 08:12:38 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728979958.XXXXXX 00:01:37.149 08:12:38 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728979958.jOj3PR 00:01:37.149 08:12:38 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:37.149 08:12:38 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:37.149 08:12:38 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:37.149 08:12:38 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:37.149 08:12:38 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:37.149 08:12:38 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:37.149 08:12:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:37.149 08:12:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:37.149 08:12:38 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:37.149 08:12:38 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:37.149 08:12:38 -- pm/common@17 -- $ local monitor 00:01:37.149 08:12:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.149 08:12:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:37.149 08:12:38 -- pm/common@21 -- $ date +%s 00:01:37.149 08:12:38 -- pm/common@25 -- $ sleep 1 00:01:37.149 08:12:38 -- pm/common@21 -- $ date +%s 00:01:37.149 08:12:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728979958 00:01:37.149 08:12:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728979958 00:01:37.149 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728979958_collect-cpu-load.pm.log 00:01:37.149 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728979958_collect-vmstat.pm.log 00:01:38.086 08:12:39 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:38.086 08:12:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:38.086 08:12:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:38.086 08:12:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:38.086 08:12:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:38.086 Tue Oct 15 08:12:39 AM UTC 2024 00:01:38.086 08:12:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:38.086 v25.01-pre-68-g30f8ce7c5 00:01:38.086 08:12:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:38.086 08:12:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:38.086 08:12:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:38.086 08:12:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:38.086 08:12:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:38.086 08:12:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.086 
************************************
00:01:38.086 START TEST ubsan
00:01:38.086 ************************************
00:01:38.086 using ubsan
00:01:38.086 08:12:39 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:38.086
00:01:38.086 real 0m0.000s
00:01:38.086 user 0m0.000s
00:01:38.086 sys 0m0.000s
00:01:38.086 08:12:39 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:38.086 08:12:39 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:38.086 ************************************
00:01:38.086 END TEST ubsan
00:01:38.086 ************************************
00:01:38.086 08:12:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:38.086 08:12:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:38.086 08:12:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:38.086 08:12:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:38.086 08:12:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:38.086 08:12:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:38.086 08:12:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:38.086 08:12:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:38.086 08:12:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
00:01:38.345 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:38.345 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:38.603 Using 'verbs' RDMA provider
00:01:52.180 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:07.057 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:07.057 Creating mk/config.mk...done.
00:02:07.057 Creating mk/cc.flags.mk...done.
00:02:07.057 Type 'make' to build.
00:02:07.057 08:13:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:07.057 08:13:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:07.057 08:13:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:07.057 08:13:08 -- common/autotest_common.sh@10 -- $ set +x
00:02:07.057 ************************************
00:02:07.057 START TEST make
00:02:07.057 ************************************
00:02:07.057 08:13:08 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:07.057 make[1]: Nothing to be done for 'all'.
00:02:21.921 The Meson build system 00:02:21.921 Version: 1.5.0 00:02:21.921 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:21.921 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:21.921 Build type: native build 00:02:21.921 Program cat found: YES (/usr/bin/cat) 00:02:21.921 Project name: DPDK 00:02:21.921 Project version: 24.03.0 00:02:21.921 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.921 C linker for the host machine: cc ld.bfd 2.40-14 00:02:21.921 Host machine cpu family: x86_64 00:02:21.921 Host machine cpu: x86_64 00:02:21.921 Message: ## Building in Developer Mode ## 00:02:21.921 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:21.921 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:21.921 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:21.921 Program python3 found: YES (/usr/bin/python3) 00:02:21.921 Program cat found: YES (/usr/bin/cat) 00:02:21.921 Compiler for C supports arguments -march=native: YES 00:02:21.921 Checking for size of "void *" : 8 00:02:21.921 Checking for size of "void *" : 8 (cached) 00:02:21.921 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:21.921 Library m found: YES 00:02:21.921 Library numa found: YES 00:02:21.921 Has header "numaif.h" : YES 00:02:21.921 Library fdt found: NO 00:02:21.922 Library execinfo found: NO 00:02:21.922 Has header "execinfo.h" : YES 00:02:21.922 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.922 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:21.922 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:21.922 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:21.922 Run-time dependency openssl found: YES 3.1.1 00:02:21.922 Run-time dependency libpcap found: YES 1.10.4 00:02:21.922 Has header "pcap.h" with dependency libpcap: YES 00:02:21.922 Compiler for C supports arguments -Wcast-qual: YES 00:02:21.922 Compiler for C supports arguments -Wdeprecated: YES 00:02:21.922 Compiler for C supports arguments -Wformat: YES 00:02:21.922 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:21.922 Compiler for C supports arguments -Wformat-security: NO 00:02:21.922 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.922 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:21.922 Compiler for C supports arguments -Wnested-externs: YES 00:02:21.922 Compiler for C supports arguments -Wold-style-definition: YES 00:02:21.922 Compiler for C supports arguments -Wpointer-arith: YES 00:02:21.922 Compiler for C supports arguments -Wsign-compare: YES 00:02:21.922 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:21.922 Compiler for C supports arguments -Wundef: YES 00:02:21.922 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.922 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:21.922 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:21.922 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.922 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:21.922 Program objdump found: YES (/usr/bin/objdump) 00:02:21.922 Compiler for C supports arguments -mavx512f: YES 00:02:21.922 Checking if "AVX512 checking" compiles: YES 00:02:21.922 Fetching value of define "__SSE4_2__" : 1 00:02:21.922 Fetching value of define 
"__AES__" : 1 00:02:21.922 Fetching value of define "__AVX__" : 1 00:02:21.922 Fetching value of define "__AVX2__" : 1 00:02:21.922 Fetching value of define "__AVX512BW__" : (undefined) 00:02:21.922 Fetching value of define "__AVX512CD__" : (undefined) 00:02:21.922 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:21.922 Fetching value of define "__AVX512F__" : (undefined) 00:02:21.922 Fetching value of define "__AVX512VL__" : (undefined) 00:02:21.922 Fetching value of define "__PCLMUL__" : 1 00:02:21.922 Fetching value of define "__RDRND__" : 1 00:02:21.922 Fetching value of define "__RDSEED__" : 1 00:02:21.922 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:21.922 Fetching value of define "__znver1__" : (undefined) 00:02:21.922 Fetching value of define "__znver2__" : (undefined) 00:02:21.922 Fetching value of define "__znver3__" : (undefined) 00:02:21.922 Fetching value of define "__znver4__" : (undefined) 00:02:21.922 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.922 Message: lib/log: Defining dependency "log" 00:02:21.922 Message: lib/kvargs: Defining dependency "kvargs" 00:02:21.922 Message: lib/telemetry: Defining dependency "telemetry" 00:02:21.922 Checking for function "getentropy" : NO 00:02:21.922 Message: lib/eal: Defining dependency "eal" 00:02:21.922 Message: lib/ring: Defining dependency "ring" 00:02:21.922 Message: lib/rcu: Defining dependency "rcu" 00:02:21.922 Message: lib/mempool: Defining dependency "mempool" 00:02:21.922 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.922 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.922 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.922 Compiler for C supports arguments -mpclmul: YES 00:02:21.922 Compiler for C supports arguments -maes: YES 00:02:21.922 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.922 Compiler for C supports arguments -mavx512bw: YES 00:02:21.922 Compiler for C supports arguments -mavx512dq: YES 00:02:21.922 Compiler for C supports arguments -mavx512vl: YES 00:02:21.922 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.922 Compiler for C supports arguments -mavx2: YES 00:02:21.922 Compiler for C supports arguments -mavx: YES 00:02:21.922 Message: lib/net: Defining dependency "net" 00:02:21.922 Message: lib/meter: Defining dependency "meter" 00:02:21.922 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.922 Message: lib/pci: Defining dependency "pci" 00:02:21.922 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.922 Message: lib/hash: Defining dependency "hash" 00:02:21.922 Message: lib/timer: Defining dependency "timer" 00:02:21.922 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.922 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.922 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.922 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.922 Message: lib/power: Defining dependency "power" 00:02:21.922 Message: lib/reorder: Defining dependency "reorder" 00:02:21.922 Message: lib/security: Defining dependency "security" 00:02:21.922 Has header "linux/userfaultfd.h" : YES 00:02:21.922 Has header "linux/vduse.h" : YES 00:02:21.922 Message: lib/vhost: Defining dependency "vhost" 00:02:21.922 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.922 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.922 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.922 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:21.922 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:21.922 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:21.922 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:21.922 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:21.922 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:21.922 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:21.922 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:21.922 Configuring doxy-api-html.conf using configuration
00:02:21.922 Configuring doxy-api-man.conf using configuration
00:02:21.922 Program mandb found: YES (/usr/bin/mandb)
00:02:21.922 Program sphinx-build found: NO
00:02:21.922 Configuring rte_build_config.h using configuration
00:02:21.922 Message:
00:02:21.922 =================
00:02:21.922 Applications Enabled
00:02:21.922 =================
00:02:21.922
00:02:21.922 apps:
00:02:21.922
00:02:21.922
00:02:21.922 Message:
00:02:21.922 =================
00:02:21.922 Libraries Enabled
00:02:21.922 =================
00:02:21.922
00:02:21.922 libs:
00:02:21.922 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:21.922 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:21.922 cryptodev, dmadev, power, reorder, security, vhost,
00:02:21.922
00:02:21.922 Message:
00:02:21.922 ===============
00:02:21.922 Drivers Enabled
00:02:21.922 ===============
00:02:21.922
00:02:21.922 common:
00:02:21.922
00:02:21.922 bus:
00:02:21.922 pci, vdev,
00:02:21.922 mempool:
00:02:21.922 ring,
00:02:21.922 dma:
00:02:21.922
00:02:21.922 net:
00:02:21.922
00:02:21.922 crypto:
00:02:21.922
00:02:21.922 compress:
00:02:21.922
00:02:21.922 vdpa:
00:02:21.922
00:02:21.922
00:02:21.922 Message:
00:02:21.922 =================
00:02:21.922 Content Skipped
00:02:21.922 =================
00:02:21.922
00:02:21.922 apps:
00:02:21.922 dumpcap: explicitly disabled via build config
00:02:21.922 graph: explicitly disabled via build config
00:02:21.922 pdump: explicitly disabled via build config
00:02:21.922 proc-info: explicitly disabled via build config
00:02:21.922 test-acl: explicitly disabled via build config
00:02:21.922 test-bbdev: explicitly disabled via build config
00:02:21.922 test-cmdline: explicitly disabled via build config
00:02:21.922 test-compress-perf: explicitly disabled via build config
00:02:21.922 test-crypto-perf: explicitly disabled via build config
00:02:21.922 test-dma-perf: explicitly disabled via build config
00:02:21.922 test-eventdev: explicitly disabled via build config
00:02:21.922 test-fib: explicitly disabled via build config
00:02:21.922 test-flow-perf: explicitly disabled via build config
00:02:21.922 test-gpudev: explicitly disabled via build config
00:02:21.922 test-mldev: explicitly disabled via build config
00:02:21.922 test-pipeline: explicitly disabled via build config
00:02:21.922 test-pmd: explicitly disabled via build config
00:02:21.922 test-regex: explicitly disabled via build config
00:02:21.922 test-sad: explicitly disabled via build config
00:02:21.922 test-security-perf: explicitly disabled via build config
00:02:21.922
00:02:21.922 libs:
00:02:21.922 argparse: explicitly disabled via build config
00:02:21.922 metrics: explicitly disabled via build config
00:02:21.922 acl: explicitly disabled via build config
00:02:21.922 bbdev: explicitly disabled via build config
00:02:21.922 bitratestats: explicitly disabled via build config 00:02:21.922 bpf: explicitly disabled via build config 00:02:21.922 cfgfile: explicitly disabled via build config 00:02:21.922 distributor: explicitly disabled via build config 00:02:21.922 efd: explicitly disabled via build config 00:02:21.922 eventdev: explicitly disabled via build config 00:02:21.922 dispatcher: explicitly disabled via build config 00:02:21.922 gpudev: explicitly disabled via build config 00:02:21.922 gro: explicitly disabled via build config 00:02:21.922 gso: explicitly disabled via build config 00:02:21.922 ip_frag: explicitly disabled via build config 00:02:21.922 jobstats: explicitly disabled via build config 00:02:21.922 latencystats: explicitly disabled via build config 00:02:21.922 lpm: explicitly disabled via build config 00:02:21.922 member: explicitly disabled via build config 00:02:21.922 pcapng: explicitly disabled via build config 00:02:21.922 rawdev: explicitly disabled via build config 00:02:21.922 regexdev: explicitly disabled via build config 00:02:21.922 mldev: explicitly disabled via build config 00:02:21.922 rib: explicitly disabled via build config 00:02:21.922 sched: explicitly disabled via build config 00:02:21.922 stack: explicitly disabled via build config 00:02:21.922 ipsec: explicitly disabled via build config 00:02:21.923 pdcp: explicitly disabled via build config 00:02:21.923 fib: explicitly disabled via build config 00:02:21.923 port: explicitly disabled via build config 00:02:21.923 pdump: explicitly disabled via build config 00:02:21.923 table: explicitly disabled via build config 00:02:21.923 pipeline: explicitly disabled via build config 00:02:21.923 graph: explicitly disabled via build config 00:02:21.923 node: explicitly disabled via build config 00:02:21.923 00:02:21.923 drivers: 00:02:21.923 common/cpt: not in enabled drivers build config 00:02:21.923 common/dpaax: not in enabled drivers build config 00:02:21.923 common/iavf: not in enabled drivers build config 00:02:21.923 common/idpf: not in enabled drivers build config 00:02:21.923 common/ionic: not in enabled drivers build config 00:02:21.923 common/mvep: not in enabled drivers build config 00:02:21.923 common/octeontx: not in enabled drivers build config 00:02:21.923 bus/auxiliary: not in enabled drivers build config 00:02:21.923 bus/cdx: not in enabled drivers build config 00:02:21.923 bus/dpaa: not in enabled drivers build config 00:02:21.923 bus/fslmc: not in enabled drivers build config 00:02:21.923 bus/ifpga: not in enabled drivers build config 00:02:21.923 bus/platform: not in enabled drivers build config 00:02:21.923 bus/uacce: not in enabled drivers build config 00:02:21.923 bus/vmbus: not in enabled drivers build config 00:02:21.923 common/cnxk: not in enabled drivers build config 00:02:21.923 common/mlx5: not in enabled drivers build config 00:02:21.923 common/nfp: not in enabled drivers build config 00:02:21.923 common/nitrox: not in enabled drivers build config 00:02:21.923 common/qat: not in enabled drivers build config 00:02:21.923 common/sfc_efx: not in enabled drivers build config 00:02:21.923 mempool/bucket: not in enabled drivers build config 00:02:21.923 mempool/cnxk: not in enabled drivers build config 00:02:21.923 mempool/dpaa: not in enabled drivers build config 00:02:21.923 mempool/dpaa2: not in enabled drivers build config 00:02:21.923 mempool/octeontx: not in enabled drivers build config 00:02:21.923 mempool/stack: not in enabled drivers build config 00:02:21.923 dma/cnxk: not in enabled 
drivers build config 00:02:21.923 dma/dpaa: not in enabled drivers build config 00:02:21.923 dma/dpaa2: not in enabled drivers build config 00:02:21.923 dma/hisilicon: not in enabled drivers build config 00:02:21.923 dma/idxd: not in enabled drivers build config 00:02:21.923 dma/ioat: not in enabled drivers build config 00:02:21.923 dma/skeleton: not in enabled drivers build config 00:02:21.923 net/af_packet: not in enabled drivers build config 00:02:21.923 net/af_xdp: not in enabled drivers build config 00:02:21.923 net/ark: not in enabled drivers build config 00:02:21.923 net/atlantic: not in enabled drivers build config 00:02:21.923 net/avp: not in enabled drivers build config 00:02:21.923 net/axgbe: not in enabled drivers build config 00:02:21.923 net/bnx2x: not in enabled drivers build config 00:02:21.923 net/bnxt: not in enabled drivers build config 00:02:21.923 net/bonding: not in enabled drivers build config 00:02:21.923 net/cnxk: not in enabled drivers build config 00:02:21.923 net/cpfl: not in enabled drivers build config 00:02:21.923 net/cxgbe: not in enabled drivers build config 00:02:21.923 net/dpaa: not in enabled drivers build config 00:02:21.923 net/dpaa2: not in enabled drivers build config 00:02:21.923 net/e1000: not in enabled drivers build config 00:02:21.923 net/ena: not in enabled drivers build config 00:02:21.923 net/enetc: not in enabled drivers build config 00:02:21.923 net/enetfec: not in enabled drivers build config 00:02:21.923 net/enic: not in enabled drivers build config 00:02:21.923 net/failsafe: not in enabled drivers build config 00:02:21.923 net/fm10k: not in enabled drivers build config 00:02:21.923 net/gve: not in enabled drivers build config 00:02:21.923 net/hinic: not in enabled drivers build config 00:02:21.923 net/hns3: not in enabled drivers build config 00:02:21.923 net/i40e: not in enabled drivers build config 00:02:21.923 net/iavf: not in enabled drivers build config 00:02:21.923 net/ice: not in enabled drivers build config 00:02:21.923 net/idpf: not in enabled drivers build config 00:02:21.923 net/igc: not in enabled drivers build config 00:02:21.923 net/ionic: not in enabled drivers build config 00:02:21.923 net/ipn3ke: not in enabled drivers build config 00:02:21.923 net/ixgbe: not in enabled drivers build config 00:02:21.923 net/mana: not in enabled drivers build config 00:02:21.923 net/memif: not in enabled drivers build config 00:02:21.923 net/mlx4: not in enabled drivers build config 00:02:21.923 net/mlx5: not in enabled drivers build config 00:02:21.923 net/mvneta: not in enabled drivers build config 00:02:21.923 net/mvpp2: not in enabled drivers build config 00:02:21.923 net/netvsc: not in enabled drivers build config 00:02:21.923 net/nfb: not in enabled drivers build config 00:02:21.923 net/nfp: not in enabled drivers build config 00:02:21.923 net/ngbe: not in enabled drivers build config 00:02:21.923 net/null: not in enabled drivers build config 00:02:21.923 net/octeontx: not in enabled drivers build config 00:02:21.923 net/octeon_ep: not in enabled drivers build config 00:02:21.923 net/pcap: not in enabled drivers build config 00:02:21.923 net/pfe: not in enabled drivers build config 00:02:21.923 net/qede: not in enabled drivers build config 00:02:21.923 net/ring: not in enabled drivers build config 00:02:21.923 net/sfc: not in enabled drivers build config 00:02:21.923 net/softnic: not in enabled drivers build config 00:02:21.923 net/tap: not in enabled drivers build config 00:02:21.923 net/thunderx: not in enabled drivers build 
config 00:02:21.923 net/txgbe: not in enabled drivers build config 00:02:21.923 net/vdev_netvsc: not in enabled drivers build config 00:02:21.923 net/vhost: not in enabled drivers build config 00:02:21.923 net/virtio: not in enabled drivers build config 00:02:21.923 net/vmxnet3: not in enabled drivers build config 00:02:21.923 raw/*: missing internal dependency, "rawdev" 00:02:21.923 crypto/armv8: not in enabled drivers build config 00:02:21.923 crypto/bcmfs: not in enabled drivers build config 00:02:21.923 crypto/caam_jr: not in enabled drivers build config 00:02:21.923 crypto/ccp: not in enabled drivers build config 00:02:21.923 crypto/cnxk: not in enabled drivers build config 00:02:21.923 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.923 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.923 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.923 crypto/mlx5: not in enabled drivers build config 00:02:21.923 crypto/mvsam: not in enabled drivers build config 00:02:21.923 crypto/nitrox: not in enabled drivers build config 00:02:21.923 crypto/null: not in enabled drivers build config 00:02:21.923 crypto/octeontx: not in enabled drivers build config 00:02:21.923 crypto/openssl: not in enabled drivers build config 00:02:21.923 crypto/scheduler: not in enabled drivers build config 00:02:21.923 crypto/uadk: not in enabled drivers build config 00:02:21.923 crypto/virtio: not in enabled drivers build config 00:02:21.923 compress/isal: not in enabled drivers build config 00:02:21.923 compress/mlx5: not in enabled drivers build config 00:02:21.923 compress/nitrox: not in enabled drivers build config 00:02:21.923 compress/octeontx: not in enabled drivers build config 00:02:21.923 compress/zlib: not in enabled drivers build config 00:02:21.923 regex/*: missing internal dependency, "regexdev" 00:02:21.923 ml/*: missing internal dependency, "mldev" 00:02:21.923 vdpa/ifc: not in enabled drivers build config 00:02:21.923 vdpa/mlx5: not in enabled drivers build config 00:02:21.923 vdpa/nfp: not in enabled drivers build config 00:02:21.923 vdpa/sfc: not in enabled drivers build config 00:02:21.923 event/*: missing internal dependency, "eventdev" 00:02:21.923 baseband/*: missing internal dependency, "bbdev" 00:02:21.923 gpu/*: missing internal dependency, "gpudev" 00:02:21.923 00:02:21.923 00:02:21.923 Build targets in project: 85 00:02:21.923 00:02:21.923 DPDK 24.03.0 00:02:21.923 00:02:21.923 User defined options 00:02:21.923 buildtype : debug 00:02:21.923 default_library : shared 00:02:21.923 libdir : lib 00:02:21.923 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:21.923 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:21.923 c_link_args : 00:02:21.923 cpu_instruction_set: native 00:02:21.923 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:21.923 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:21.923 enable_docs : false 00:02:21.923 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:21.923 enable_kmods : false 00:02:21.923 max_lcores : 128 00:02:21.923 tests : false 00:02:21.923 
00:02:21.923 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.490 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:22.490 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.750 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:22.750 [3/268] Linking static target lib/librte_kvargs.a 00:02:22.750 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:22.750 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:22.750 [6/268] Linking static target lib/librte_log.a 00:02:23.315 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.315 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.315 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.315 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.315 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.574 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.574 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.832 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:23.832 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.832 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.832 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.832 [18/268] Linking static target lib/librte_telemetry.a 00:02:24.090 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.090 [20/268] Linking target lib/librte_log.so.24.1 00:02:24.347 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.347 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.347 [23/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:24.605 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.605 [25/268] Linking target lib/librte_kvargs.so.24.1 00:02:24.605 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:24.605 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.863 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.863 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.863 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.863 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.863 [32/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:24.863 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:24.863 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:25.121 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:25.380 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:25.380 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:25.380 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 
00:02:25.380 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.380 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.638 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.638 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:25.638 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.638 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.896 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.896 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:26.154 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:26.154 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:26.154 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:26.154 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:26.154 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:26.413 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:26.670 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:26.670 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:26.670 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.928 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.928 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.928 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.928 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:27.186 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:27.186 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:27.186 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:27.186 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:27.445 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:27.445 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:27.703 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:27.703 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:27.703 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:27.703 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.961 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.961 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:27.961 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:27.961 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.961 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:27.961 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.961 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:28.219 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:28.219 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:28.219 
[79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:28.220 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:28.785 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:28.785 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:28.785 [83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:28.786 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:28.786 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:28.786 [86/268] Linking static target lib/librte_ring.a 00:02:28.786 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:28.786 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:29.043 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:29.043 [90/268] Linking static target lib/librte_eal.a 00:02:29.300 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:29.300 [92/268] Linking static target lib/librte_rcu.a 00:02:29.300 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.300 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:29.300 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:29.300 [96/268] Linking static target lib/librte_mempool.a 00:02:29.300 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:29.557 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:29.557 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:29.557 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:29.814 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:29.814 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:29.814 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.814 [104/268] Linking static target lib/librte_mbuf.a 00:02:30.072 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.072 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:30.072 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.330 [108/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.330 [109/268] Linking static target lib/librte_net.a 00:02:30.330 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.330 [111/268] Linking static target lib/librte_meter.a 00:02:30.589 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.589 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.845 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.845 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.845 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.845 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.845 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.102 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.360 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:31.618 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:31.876 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:32.162 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:32.162 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:32.162 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:32.162 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:32.162 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:32.162 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:32.162 [129/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.162 [130/268] Linking static target lib/librte_pci.a 00:02:32.162 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:32.162 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.162 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:32.162 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:32.422 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:32.422 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:32.422 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.422 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:32.422 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:32.680 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:32.680 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:32.680 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:32.680 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:32.938 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:32.938 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.938 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.938 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:32.938 [148/268] Linking static target lib/librte_cmdline.a 00:02:32.938 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:32.938 [150/268] Linking static target lib/librte_ethdev.a 00:02:33.197 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:33.455 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:33.455 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.713 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:33.713 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:33.713 [156/268] Linking static target lib/librte_hash.a 00:02:33.713 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.713 [158/268] Linking static target lib/librte_timer.a 00:02:33.971 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:33.971 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:33.971 
[161/268] Linking static target lib/librte_compressdev.a 00:02:34.228 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:34.228 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:34.228 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:34.487 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.487 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:34.744 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.744 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:34.744 [169/268] Linking static target lib/librte_dmadev.a 00:02:34.744 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.744 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.002 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:35.002 [173/268] Linking static target lib/librte_cryptodev.a 00:02:35.002 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.002 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.002 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.002 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.261 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:35.519 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:35.777 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:35.777 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.777 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.777 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:35.777 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.036 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.036 [186/268] Linking static target lib/librte_power.a 00:02:36.299 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:36.299 [188/268] Linking static target lib/librte_reorder.a 00:02:36.560 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.560 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.818 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:36.818 [192/268] Linking static target lib/librte_security.a 00:02:36.818 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.818 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.818 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:37.782 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.782 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:37.782 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:37.782 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.782 [200/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.782 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.782 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:38.349 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.349 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:38.608 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:38.608 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:38.866 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:38.866 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.866 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.866 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.866 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:38.866 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.124 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:39.124 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.124 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:39.124 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.381 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:39.381 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.381 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:39.381 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.381 [221/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.381 [222/268] Linking static target drivers/librte_bus_pci.a 00:02:39.639 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:39.639 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.639 [225/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.639 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.639 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:39.898 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.938 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:40.938 [230/268] Linking static target lib/librte_vhost.a 00:02:41.508 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.508 [232/268] Linking target lib/librte_eal.so.24.1 00:02:41.765 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.765 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.765 [235/268] Linking target lib/librte_timer.so.24.1 00:02:41.765 [236/268] Linking target lib/librte_meter.so.24.1 00:02:41.765 [237/268] Linking target lib/librte_ring.so.24.1 00:02:41.765 [238/268] Linking target lib/librte_pci.so.24.1 00:02:41.765 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 
00:02:42.023 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:42.023 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:42.023 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:42.023 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:42.023 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:42.023 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:42.023 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:42.023 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:42.281 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.281 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.281 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.281 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.281 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:42.538 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.538 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:42.538 [255/268] Linking target lib/librte_net.so.24.1 00:02:42.538 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:42.538 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:42.538 [258/268] Linking target lib/librte_compressdev.so.24.1 00:02:42.538 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.538 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.796 [261/268] Linking target lib/librte_security.so.24.1 00:02:42.796 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.796 [263/268] Linking target lib/librte_hash.so.24.1 00:02:42.796 [264/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.796 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:43.054 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:43.054 [267/268] Linking target lib/librte_power.so.24.1 00:02:43.054 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:43.054 INFO: autodetecting backend as ninja 00:02:43.054 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:15.119 CC lib/ut_mock/mock.o 00:03:15.119 CC lib/ut/ut.o 00:03:15.119 CC lib/log/log_flags.o 00:03:15.119 CC lib/log/log.o 00:03:15.119 CC lib/log/log_deprecated.o 00:03:15.119 LIB libspdk_ut.a 00:03:15.119 LIB libspdk_log.a 00:03:15.119 LIB libspdk_ut_mock.a 00:03:15.119 SO libspdk_ut.so.2.0 00:03:15.119 SO libspdk_ut_mock.so.6.0 00:03:15.119 SO libspdk_log.so.7.1 00:03:15.119 SYMLINK libspdk_log.so 00:03:15.119 SYMLINK libspdk_ut.so 00:03:15.119 SYMLINK libspdk_ut_mock.so 00:03:15.119 CC lib/ioat/ioat.o 00:03:15.119 CC lib/util/base64.o 00:03:15.119 CXX lib/trace_parser/trace.o 00:03:15.119 CC lib/util/bit_array.o 00:03:15.119 CC lib/util/cpuset.o 00:03:15.119 CC lib/util/crc32.o 00:03:15.119 CC lib/util/crc16.o 00:03:15.119 CC lib/util/crc32c.o 00:03:15.119 CC lib/dma/dma.o 00:03:15.119 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.119 CC lib/vfio_user/host/vfio_user.o 00:03:15.119 CC lib/util/crc32_ieee.o 00:03:15.119 CC 
lib/util/crc64.o 00:03:15.119 CC lib/util/dif.o 00:03:15.119 CC lib/util/fd.o 00:03:15.119 CC lib/util/fd_group.o 00:03:15.119 LIB libspdk_dma.a 00:03:15.119 LIB libspdk_ioat.a 00:03:15.119 SO libspdk_dma.so.5.0 00:03:15.119 SO libspdk_ioat.so.7.0 00:03:15.119 CC lib/util/file.o 00:03:15.119 CC lib/util/hexlify.o 00:03:15.119 CC lib/util/iov.o 00:03:15.119 SYMLINK libspdk_dma.so 00:03:15.119 CC lib/util/math.o 00:03:15.119 SYMLINK libspdk_ioat.so 00:03:15.119 CC lib/util/net.o 00:03:15.119 CC lib/util/pipe.o 00:03:15.119 LIB libspdk_vfio_user.a 00:03:15.119 SO libspdk_vfio_user.so.5.0 00:03:15.119 CC lib/util/strerror_tls.o 00:03:15.119 SYMLINK libspdk_vfio_user.so 00:03:15.119 CC lib/util/string.o 00:03:15.119 CC lib/util/uuid.o 00:03:15.119 CC lib/util/xor.o 00:03:15.119 CC lib/util/zipf.o 00:03:15.119 CC lib/util/md5.o 00:03:15.119 LIB libspdk_util.a 00:03:15.119 SO libspdk_util.so.10.0 00:03:15.119 LIB libspdk_trace_parser.a 00:03:15.119 SYMLINK libspdk_util.so 00:03:15.119 SO libspdk_trace_parser.so.6.0 00:03:15.119 SYMLINK libspdk_trace_parser.so 00:03:15.119 CC lib/json/json_parse.o 00:03:15.119 CC lib/json/json_util.o 00:03:15.119 CC lib/rdma_provider/common.o 00:03:15.119 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:15.119 CC lib/json/json_write.o 00:03:15.119 CC lib/rdma_utils/rdma_utils.o 00:03:15.119 CC lib/conf/conf.o 00:03:15.119 CC lib/idxd/idxd.o 00:03:15.119 CC lib/vmd/vmd.o 00:03:15.119 CC lib/env_dpdk/env.o 00:03:15.119 CC lib/env_dpdk/memory.o 00:03:15.119 LIB libspdk_conf.a 00:03:15.119 CC lib/env_dpdk/pci.o 00:03:15.119 CC lib/env_dpdk/init.o 00:03:15.119 SO libspdk_conf.so.6.0 00:03:15.119 LIB libspdk_rdma_provider.a 00:03:15.119 LIB libspdk_rdma_utils.a 00:03:15.119 SO libspdk_rdma_provider.so.6.0 00:03:15.119 SYMLINK libspdk_conf.so 00:03:15.119 SO libspdk_rdma_utils.so.1.0 00:03:15.119 LIB libspdk_json.a 00:03:15.119 CC lib/env_dpdk/threads.o 00:03:15.119 SO libspdk_json.so.6.0 00:03:15.119 SYMLINK libspdk_rdma_provider.so 00:03:15.119 SYMLINK libspdk_rdma_utils.so 00:03:15.119 CC lib/env_dpdk/pci_ioat.o 00:03:15.119 CC lib/vmd/led.o 00:03:15.119 SYMLINK libspdk_json.so 00:03:15.119 CC lib/idxd/idxd_user.o 00:03:15.119 CC lib/env_dpdk/pci_virtio.o 00:03:15.119 CC lib/env_dpdk/pci_vmd.o 00:03:15.119 CC lib/idxd/idxd_kernel.o 00:03:15.119 CC lib/env_dpdk/pci_idxd.o 00:03:15.119 CC lib/env_dpdk/pci_event.o 00:03:15.119 CC lib/env_dpdk/sigbus_handler.o 00:03:15.119 CC lib/env_dpdk/pci_dpdk.o 00:03:15.119 LIB libspdk_vmd.a 00:03:15.119 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:15.119 SO libspdk_vmd.so.6.0 00:03:15.119 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:15.119 SYMLINK libspdk_vmd.so 00:03:15.119 LIB libspdk_idxd.a 00:03:15.119 SO libspdk_idxd.so.12.1 00:03:15.119 SYMLINK libspdk_idxd.so 00:03:15.119 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.119 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.119 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.119 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:15.119 LIB libspdk_jsonrpc.a 00:03:15.119 SO libspdk_jsonrpc.so.6.0 00:03:15.119 SYMLINK libspdk_jsonrpc.so 00:03:15.119 LIB libspdk_env_dpdk.a 00:03:15.119 SO libspdk_env_dpdk.so.15.0 00:03:15.119 CC lib/rpc/rpc.o 00:03:15.119 SYMLINK libspdk_env_dpdk.so 00:03:15.377 LIB libspdk_rpc.a 00:03:15.377 SO libspdk_rpc.so.6.0 00:03:15.378 SYMLINK libspdk_rpc.so 00:03:15.635 CC lib/trace/trace.o 00:03:15.635 CC lib/trace/trace_flags.o 00:03:15.635 CC lib/trace/trace_rpc.o 00:03:15.635 CC lib/notify/notify.o 00:03:15.635 CC lib/notify/notify_rpc.o 00:03:15.635 CC lib/keyring/keyring.o 00:03:15.635 CC 
lib/keyring/keyring_rpc.o 00:03:15.893 LIB libspdk_notify.a 00:03:15.893 SO libspdk_notify.so.6.0 00:03:15.893 LIB libspdk_keyring.a 00:03:15.893 LIB libspdk_trace.a 00:03:15.893 SO libspdk_keyring.so.2.0 00:03:15.893 SO libspdk_trace.so.11.0 00:03:15.893 SYMLINK libspdk_notify.so 00:03:16.151 SYMLINK libspdk_keyring.so 00:03:16.151 SYMLINK libspdk_trace.so 00:03:16.407 CC lib/thread/thread.o 00:03:16.407 CC lib/thread/iobuf.o 00:03:16.407 CC lib/sock/sock.o 00:03:16.407 CC lib/sock/sock_rpc.o 00:03:16.972 LIB libspdk_sock.a 00:03:16.972 SO libspdk_sock.so.10.0 00:03:16.972 SYMLINK libspdk_sock.so 00:03:17.229 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:17.229 CC lib/nvme/nvme_ctrlr.o 00:03:17.229 CC lib/nvme/nvme_fabric.o 00:03:17.229 CC lib/nvme/nvme_ns_cmd.o 00:03:17.229 CC lib/nvme/nvme_pcie_common.o 00:03:17.229 CC lib/nvme/nvme_ns.o 00:03:17.229 CC lib/nvme/nvme_pcie.o 00:03:17.229 CC lib/nvme/nvme.o 00:03:17.229 CC lib/nvme/nvme_qpair.o 00:03:18.159 LIB libspdk_thread.a 00:03:18.159 SO libspdk_thread.so.10.2 00:03:18.159 CC lib/nvme/nvme_quirks.o 00:03:18.159 CC lib/nvme/nvme_transport.o 00:03:18.159 SYMLINK libspdk_thread.so 00:03:18.159 CC lib/nvme/nvme_discovery.o 00:03:18.159 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.159 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.417 CC lib/nvme/nvme_tcp.o 00:03:18.417 CC lib/nvme/nvme_opal.o 00:03:18.417 CC lib/nvme/nvme_io_msg.o 00:03:18.676 CC lib/nvme/nvme_poll_group.o 00:03:18.676 CC lib/nvme/nvme_zns.o 00:03:18.933 CC lib/nvme/nvme_stubs.o 00:03:18.933 CC lib/nvme/nvme_auth.o 00:03:18.933 CC lib/accel/accel.o 00:03:19.191 CC lib/nvme/nvme_cuse.o 00:03:19.448 CC lib/nvme/nvme_rdma.o 00:03:19.448 CC lib/accel/accel_rpc.o 00:03:19.448 CC lib/accel/accel_sw.o 00:03:19.706 CC lib/blob/blobstore.o 00:03:19.706 CC lib/init/json_config.o 00:03:19.706 CC lib/init/subsystem.o 00:03:19.706 CC lib/init/subsystem_rpc.o 00:03:19.964 CC lib/init/rpc.o 00:03:19.964 CC lib/blob/request.o 00:03:19.964 CC lib/blob/zeroes.o 00:03:19.964 CC lib/blob/blob_bs_dev.o 00:03:20.222 LIB libspdk_init.a 00:03:20.222 CC lib/virtio/virtio.o 00:03:20.222 CC lib/virtio/virtio_vfio_user.o 00:03:20.222 CC lib/virtio/virtio_vhost_user.o 00:03:20.222 SO libspdk_init.so.6.0 00:03:20.222 SYMLINK libspdk_init.so 00:03:20.222 CC lib/virtio/virtio_pci.o 00:03:20.222 CC lib/fsdev/fsdev.o 00:03:20.222 CC lib/fsdev/fsdev_io.o 00:03:20.480 LIB libspdk_accel.a 00:03:20.480 SO libspdk_accel.so.16.0 00:03:20.480 CC lib/fsdev/fsdev_rpc.o 00:03:20.480 SYMLINK libspdk_accel.so 00:03:20.480 LIB libspdk_virtio.a 00:03:20.737 CC lib/event/app.o 00:03:20.737 CC lib/event/reactor.o 00:03:20.737 CC lib/event/log_rpc.o 00:03:20.737 SO libspdk_virtio.so.7.0 00:03:20.737 CC lib/bdev/bdev.o 00:03:20.737 SYMLINK libspdk_virtio.so 00:03:20.737 CC lib/event/app_rpc.o 00:03:20.737 CC lib/event/scheduler_static.o 00:03:20.737 CC lib/bdev/bdev_rpc.o 00:03:20.995 LIB libspdk_nvme.a 00:03:20.995 CC lib/bdev/bdev_zone.o 00:03:20.995 CC lib/bdev/part.o 00:03:20.995 LIB libspdk_fsdev.a 00:03:20.995 SO libspdk_fsdev.so.1.0 00:03:20.995 CC lib/bdev/scsi_nvme.o 00:03:20.995 LIB libspdk_event.a 00:03:20.995 SYMLINK libspdk_fsdev.so 00:03:20.995 SO libspdk_nvme.so.14.0 00:03:21.253 SO libspdk_event.so.14.0 00:03:21.253 SYMLINK libspdk_event.so 00:03:21.253 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:21.253 SYMLINK libspdk_nvme.so 00:03:22.188 LIB libspdk_fuse_dispatcher.a 00:03:22.188 SO libspdk_fuse_dispatcher.so.1.0 00:03:22.188 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.123 LIB libspdk_blob.a 00:03:23.123 SO 
libspdk_blob.so.11.0 00:03:23.123 SYMLINK libspdk_blob.so 00:03:23.381 CC lib/blobfs/tree.o 00:03:23.381 CC lib/blobfs/blobfs.o 00:03:23.381 CC lib/lvol/lvol.o 00:03:23.651 LIB libspdk_bdev.a 00:03:23.651 SO libspdk_bdev.so.17.0 00:03:23.651 SYMLINK libspdk_bdev.so 00:03:23.909 CC lib/ftl/ftl_core.o 00:03:23.909 CC lib/ftl/ftl_init.o 00:03:23.909 CC lib/ublk/ublk.o 00:03:23.909 CC lib/ftl/ftl_layout.o 00:03:23.909 CC lib/ublk/ublk_rpc.o 00:03:23.909 CC lib/nvmf/ctrlr.o 00:03:23.909 CC lib/scsi/dev.o 00:03:23.909 CC lib/nbd/nbd.o 00:03:24.168 CC lib/nvmf/ctrlr_discovery.o 00:03:24.168 CC lib/scsi/lun.o 00:03:24.426 CC lib/nvmf/ctrlr_bdev.o 00:03:24.426 CC lib/nvmf/subsystem.o 00:03:24.426 LIB libspdk_lvol.a 00:03:24.426 CC lib/nbd/nbd_rpc.o 00:03:24.426 LIB libspdk_blobfs.a 00:03:24.426 SO libspdk_lvol.so.10.0 00:03:24.426 SO libspdk_blobfs.so.10.0 00:03:24.426 SYMLINK libspdk_lvol.so 00:03:24.684 CC lib/nvmf/nvmf.o 00:03:24.684 SYMLINK libspdk_blobfs.so 00:03:24.684 CC lib/nvmf/nvmf_rpc.o 00:03:24.684 CC lib/ftl/ftl_debug.o 00:03:24.684 CC lib/scsi/port.o 00:03:24.684 LIB libspdk_ublk.a 00:03:24.684 SO libspdk_ublk.so.3.0 00:03:24.684 CC lib/scsi/scsi.o 00:03:24.684 SYMLINK libspdk_ublk.so 00:03:24.684 CC lib/scsi/scsi_bdev.o 00:03:24.684 CC lib/scsi/scsi_pr.o 00:03:24.684 LIB libspdk_nbd.a 00:03:24.942 CC lib/ftl/ftl_io.o 00:03:24.942 SO libspdk_nbd.so.7.0 00:03:24.942 CC lib/scsi/scsi_rpc.o 00:03:24.942 SYMLINK libspdk_nbd.so 00:03:24.942 CC lib/scsi/task.o 00:03:24.942 CC lib/nvmf/transport.o 00:03:25.200 CC lib/ftl/ftl_sb.o 00:03:25.200 CC lib/ftl/ftl_l2p.o 00:03:25.200 CC lib/nvmf/tcp.o 00:03:25.200 CC lib/ftl/ftl_l2p_flat.o 00:03:25.458 CC lib/ftl/ftl_nv_cache.o 00:03:25.458 LIB libspdk_scsi.a 00:03:25.458 CC lib/nvmf/stubs.o 00:03:25.458 SO libspdk_scsi.so.9.0 00:03:25.458 CC lib/ftl/ftl_band.o 00:03:25.716 SYMLINK libspdk_scsi.so 00:03:25.716 CC lib/ftl/ftl_band_ops.o 00:03:25.716 CC lib/nvmf/mdns_server.o 00:03:25.716 CC lib/nvmf/rdma.o 00:03:25.716 CC lib/nvmf/auth.o 00:03:25.974 CC lib/ftl/ftl_writer.o 00:03:25.974 CC lib/ftl/ftl_rq.o 00:03:25.974 CC lib/iscsi/conn.o 00:03:26.232 CC lib/ftl/ftl_reloc.o 00:03:26.232 CC lib/vhost/vhost.o 00:03:26.232 CC lib/vhost/vhost_rpc.o 00:03:26.232 CC lib/ftl/ftl_l2p_cache.o 00:03:26.489 CC lib/ftl/ftl_p2l.o 00:03:26.489 CC lib/ftl/ftl_p2l_log.o 00:03:26.747 CC lib/vhost/vhost_scsi.o 00:03:26.747 CC lib/iscsi/init_grp.o 00:03:26.747 CC lib/iscsi/iscsi.o 00:03:26.747 CC lib/iscsi/param.o 00:03:27.004 CC lib/vhost/vhost_blk.o 00:03:27.004 CC lib/vhost/rte_vhost_user.o 00:03:27.004 CC lib/ftl/mngt/ftl_mngt.o 00:03:27.004 CC lib/iscsi/portal_grp.o 00:03:27.262 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:27.262 CC lib/iscsi/tgt_node.o 00:03:27.262 CC lib/iscsi/iscsi_subsystem.o 00:03:27.262 CC lib/iscsi/iscsi_rpc.o 00:03:27.520 CC lib/iscsi/task.o 00:03:27.520 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:27.520 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:27.778 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:27.778 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:27.778 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:27.778 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:27.778 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:27.778 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:28.037 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:28.037 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:28.037 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:28.037 CC lib/ftl/utils/ftl_conf.o 00:03:28.037 CC lib/ftl/utils/ftl_md.o 00:03:28.037 LIB libspdk_vhost.a 00:03:28.037 CC lib/ftl/utils/ftl_mempool.o 00:03:28.037 SO libspdk_vhost.so.8.0 00:03:28.294 
CC lib/ftl/utils/ftl_bitmap.o 00:03:28.294 LIB libspdk_nvmf.a 00:03:28.294 CC lib/ftl/utils/ftl_property.o 00:03:28.294 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:28.294 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:28.294 SYMLINK libspdk_vhost.so 00:03:28.294 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:28.294 LIB libspdk_iscsi.a 00:03:28.294 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:28.294 SO libspdk_nvmf.so.19.0 00:03:28.552 SO libspdk_iscsi.so.8.0 00:03:28.552 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:28.552 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:28.552 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:28.552 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:28.552 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:28.552 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:28.552 SYMLINK libspdk_iscsi.so 00:03:28.552 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:28.552 SYMLINK libspdk_nvmf.so 00:03:28.552 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:28.552 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:28.552 CC lib/ftl/base/ftl_base_dev.o 00:03:28.552 CC lib/ftl/base/ftl_base_bdev.o 00:03:28.810 CC lib/ftl/ftl_trace.o 00:03:29.068 LIB libspdk_ftl.a 00:03:29.326 SO libspdk_ftl.so.9.0 00:03:29.584 SYMLINK libspdk_ftl.so 00:03:30.151 CC module/env_dpdk/env_dpdk_rpc.o 00:03:30.151 CC module/keyring/linux/keyring.o 00:03:30.151 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:30.151 CC module/sock/posix/posix.o 00:03:30.151 CC module/blob/bdev/blob_bdev.o 00:03:30.151 CC module/scheduler/gscheduler/gscheduler.o 00:03:30.151 CC module/accel/error/accel_error.o 00:03:30.151 CC module/fsdev/aio/fsdev_aio.o 00:03:30.151 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:30.151 CC module/keyring/file/keyring.o 00:03:30.151 LIB libspdk_env_dpdk_rpc.a 00:03:30.151 SO libspdk_env_dpdk_rpc.so.6.0 00:03:30.151 CC module/keyring/linux/keyring_rpc.o 00:03:30.408 SYMLINK libspdk_env_dpdk_rpc.so 00:03:30.408 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:30.408 CC module/accel/error/accel_error_rpc.o 00:03:30.408 LIB libspdk_scheduler_dynamic.a 00:03:30.408 LIB libspdk_scheduler_dpdk_governor.a 00:03:30.408 LIB libspdk_blob_bdev.a 00:03:30.408 LIB libspdk_scheduler_gscheduler.a 00:03:30.408 SO libspdk_blob_bdev.so.11.0 00:03:30.408 SO libspdk_scheduler_dynamic.so.4.0 00:03:30.408 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:30.408 CC module/keyring/file/keyring_rpc.o 00:03:30.408 LIB libspdk_keyring_linux.a 00:03:30.408 SO libspdk_scheduler_gscheduler.so.4.0 00:03:30.408 SO libspdk_keyring_linux.so.1.0 00:03:30.408 SYMLINK libspdk_scheduler_dynamic.so 00:03:30.408 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:30.408 CC module/fsdev/aio/linux_aio_mgr.o 00:03:30.408 SYMLINK libspdk_blob_bdev.so 00:03:30.408 SYMLINK libspdk_scheduler_gscheduler.so 00:03:30.667 SYMLINK libspdk_keyring_linux.so 00:03:30.667 LIB libspdk_keyring_file.a 00:03:30.667 LIB libspdk_accel_error.a 00:03:30.667 SO libspdk_keyring_file.so.2.0 00:03:30.667 SO libspdk_accel_error.so.2.0 00:03:30.667 SYMLINK libspdk_keyring_file.so 00:03:30.667 CC module/sock/uring/uring.o 00:03:30.667 SYMLINK libspdk_accel_error.so 00:03:30.667 CC module/accel/ioat/accel_ioat.o 00:03:30.667 CC module/accel/dsa/accel_dsa.o 00:03:30.667 LIB libspdk_fsdev_aio.a 00:03:30.925 CC module/accel/iaa/accel_iaa.o 00:03:30.925 SO libspdk_fsdev_aio.so.1.0 00:03:30.925 CC module/bdev/delay/vbdev_delay.o 00:03:30.925 SYMLINK libspdk_fsdev_aio.so 00:03:30.925 CC module/accel/iaa/accel_iaa_rpc.o 00:03:30.925 CC module/bdev/gpt/gpt.o 00:03:30.925 CC module/bdev/error/vbdev_error.o 00:03:30.925 CC 
module/bdev/lvol/vbdev_lvol.o 00:03:30.925 CC module/accel/ioat/accel_ioat_rpc.o 00:03:30.925 LIB libspdk_sock_posix.a 00:03:30.925 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.183 SO libspdk_sock_posix.so.6.0 00:03:31.183 LIB libspdk_accel_iaa.a 00:03:31.183 LIB libspdk_accel_ioat.a 00:03:31.183 CC module/accel/dsa/accel_dsa_rpc.o 00:03:31.183 SO libspdk_accel_iaa.so.3.0 00:03:31.183 SO libspdk_accel_ioat.so.6.0 00:03:31.183 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.183 SYMLINK libspdk_sock_posix.so 00:03:31.183 SYMLINK libspdk_accel_ioat.so 00:03:31.183 SYMLINK libspdk_accel_iaa.so 00:03:31.442 LIB libspdk_bdev_error.a 00:03:31.442 SO libspdk_bdev_error.so.6.0 00:03:31.442 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.442 CC module/bdev/malloc/bdev_malloc.o 00:03:31.442 LIB libspdk_accel_dsa.a 00:03:31.442 SYMLINK libspdk_bdev_error.so 00:03:31.442 CC module/bdev/null/bdev_null.o 00:03:31.442 CC module/bdev/nvme/bdev_nvme.o 00:03:31.442 SO libspdk_accel_dsa.so.5.0 00:03:31.442 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.442 SYMLINK libspdk_accel_dsa.so 00:03:31.442 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.699 LIB libspdk_bdev_delay.a 00:03:31.699 LIB libspdk_bdev_gpt.a 00:03:31.699 SO libspdk_bdev_delay.so.6.0 00:03:31.699 CC module/bdev/passthru/vbdev_passthru.o 00:03:31.699 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.699 SO libspdk_bdev_gpt.so.6.0 00:03:31.699 SYMLINK libspdk_bdev_delay.so 00:03:31.699 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:31.699 CC module/bdev/null/bdev_null_rpc.o 00:03:31.699 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.699 LIB libspdk_sock_uring.a 00:03:31.699 SYMLINK libspdk_bdev_gpt.so 00:03:31.699 SO libspdk_sock_uring.so.5.0 00:03:31.957 SYMLINK libspdk_sock_uring.so 00:03:31.957 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:31.957 LIB libspdk_bdev_lvol.a 00:03:31.957 LIB libspdk_blobfs_bdev.a 00:03:31.957 LIB libspdk_bdev_null.a 00:03:31.957 SO libspdk_bdev_lvol.so.6.0 00:03:31.957 CC module/bdev/nvme/nvme_rpc.o 00:03:31.957 SO libspdk_bdev_null.so.6.0 00:03:31.957 CC module/bdev/raid/bdev_raid.o 00:03:31.957 SO libspdk_blobfs_bdev.so.6.0 00:03:32.215 SYMLINK libspdk_bdev_lvol.so 00:03:32.215 LIB libspdk_bdev_malloc.a 00:03:32.215 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.215 SYMLINK libspdk_bdev_null.so 00:03:32.215 CC module/bdev/nvme/vbdev_opal.o 00:03:32.215 SYMLINK libspdk_blobfs_bdev.so 00:03:32.215 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.215 SO libspdk_bdev_malloc.so.6.0 00:03:32.215 LIB libspdk_bdev_passthru.a 00:03:32.215 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.215 SYMLINK libspdk_bdev_malloc.so 00:03:32.215 CC module/bdev/raid/bdev_raid_rpc.o 00:03:32.215 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.215 SO libspdk_bdev_passthru.so.6.0 00:03:32.473 SYMLINK libspdk_bdev_passthru.so 00:03:32.731 CC module/bdev/raid/raid0.o 00:03:32.731 CC module/bdev/raid/raid1.o 00:03:32.731 CC module/bdev/split/vbdev_split.o 00:03:32.731 CC module/bdev/raid/concat.o 00:03:32.731 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:32.731 CC module/bdev/uring/bdev_uring.o 00:03:32.731 CC module/bdev/aio/bdev_aio.o 00:03:32.989 CC module/bdev/ftl/bdev_ftl.o 00:03:32.989 CC module/bdev/split/vbdev_split_rpc.o 00:03:32.989 CC module/bdev/uring/bdev_uring_rpc.o 00:03:32.989 CC module/bdev/aio/bdev_aio_rpc.o 00:03:32.989 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.989 LIB libspdk_bdev_split.a 00:03:32.989 SO libspdk_bdev_split.so.6.0 00:03:33.247 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:33.247 SYMLINK 
libspdk_bdev_split.so 00:03:33.247 LIB libspdk_bdev_uring.a 00:03:33.247 SO libspdk_bdev_uring.so.6.0 00:03:33.247 LIB libspdk_bdev_zone_block.a 00:03:33.505 CC module/bdev/iscsi/bdev_iscsi.o 00:03:33.505 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.505 LIB libspdk_bdev_raid.a 00:03:33.505 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.505 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.505 SO libspdk_bdev_zone_block.so.6.0 00:03:33.505 SYMLINK libspdk_bdev_uring.so 00:03:33.505 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.505 LIB libspdk_bdev_aio.a 00:03:33.505 SO libspdk_bdev_raid.so.6.0 00:03:33.505 SYMLINK libspdk_bdev_zone_block.so 00:03:33.505 LIB libspdk_bdev_ftl.a 00:03:33.505 SO libspdk_bdev_aio.so.6.0 00:03:33.505 SO libspdk_bdev_ftl.so.6.0 00:03:33.505 SYMLINK libspdk_bdev_raid.so 00:03:33.505 SYMLINK libspdk_bdev_aio.so 00:03:33.505 SYMLINK libspdk_bdev_ftl.so 00:03:33.763 LIB libspdk_bdev_iscsi.a 00:03:33.763 SO libspdk_bdev_iscsi.so.6.0 00:03:34.022 SYMLINK libspdk_bdev_iscsi.so 00:03:34.022 LIB libspdk_bdev_virtio.a 00:03:34.022 LIB libspdk_bdev_nvme.a 00:03:34.022 SO libspdk_bdev_virtio.so.6.0 00:03:34.022 SO libspdk_bdev_nvme.so.7.0 00:03:34.022 SYMLINK libspdk_bdev_virtio.so 00:03:34.280 SYMLINK libspdk_bdev_nvme.so 00:03:34.848 CC module/event/subsystems/sock/sock.o 00:03:34.848 CC module/event/subsystems/fsdev/fsdev.o 00:03:34.848 CC module/event/subsystems/vmd/vmd.o 00:03:34.848 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.848 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.848 CC module/event/subsystems/keyring/keyring.o 00:03:34.848 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.848 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.848 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.848 LIB libspdk_event_keyring.a 00:03:34.848 LIB libspdk_event_scheduler.a 00:03:34.848 LIB libspdk_event_iobuf.a 00:03:34.848 SO libspdk_event_keyring.so.1.0 00:03:34.848 SO libspdk_event_scheduler.so.4.0 00:03:34.848 LIB libspdk_event_sock.a 00:03:34.848 SO libspdk_event_iobuf.so.3.0 00:03:34.848 LIB libspdk_event_fsdev.a 00:03:34.848 SO libspdk_event_sock.so.5.0 00:03:34.848 SYMLINK libspdk_event_keyring.so 00:03:34.848 SYMLINK libspdk_event_scheduler.so 00:03:34.848 LIB libspdk_event_vmd.a 00:03:35.106 SO libspdk_event_fsdev.so.1.0 00:03:35.106 LIB libspdk_event_vhost_blk.a 00:03:35.106 SO libspdk_event_vmd.so.6.0 00:03:35.106 SYMLINK libspdk_event_iobuf.so 00:03:35.106 SYMLINK libspdk_event_sock.so 00:03:35.106 SO libspdk_event_vhost_blk.so.3.0 00:03:35.106 SYMLINK libspdk_event_fsdev.so 00:03:35.106 SYMLINK libspdk_event_vmd.so 00:03:35.106 SYMLINK libspdk_event_vhost_blk.so 00:03:35.364 CC module/event/subsystems/accel/accel.o 00:03:35.364 LIB libspdk_event_accel.a 00:03:35.622 SO libspdk_event_accel.so.6.0 00:03:35.622 SYMLINK libspdk_event_accel.so 00:03:35.880 CC module/event/subsystems/bdev/bdev.o 00:03:36.139 LIB libspdk_event_bdev.a 00:03:36.139 SO libspdk_event_bdev.so.6.0 00:03:36.139 SYMLINK libspdk_event_bdev.so 00:03:36.397 CC module/event/subsystems/nbd/nbd.o 00:03:36.397 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.397 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.397 CC module/event/subsystems/ublk/ublk.o 00:03:36.397 CC module/event/subsystems/scsi/scsi.o 00:03:36.656 LIB libspdk_event_nbd.a 00:03:36.656 LIB libspdk_event_ublk.a 00:03:36.656 SO libspdk_event_nbd.so.6.0 00:03:36.656 LIB libspdk_event_scsi.a 00:03:36.656 SO libspdk_event_ublk.so.3.0 00:03:36.656 SO libspdk_event_scsi.so.6.0 00:03:36.656 SYMLINK 
libspdk_event_nbd.so 00:03:36.656 SYMLINK libspdk_event_ublk.so 00:03:36.656 SYMLINK libspdk_event_scsi.so 00:03:36.656 LIB libspdk_event_nvmf.a 00:03:36.656 SO libspdk_event_nvmf.so.6.0 00:03:36.914 SYMLINK libspdk_event_nvmf.so 00:03:36.914 CC module/event/subsystems/iscsi/iscsi.o 00:03:36.914 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.172 LIB libspdk_event_vhost_scsi.a 00:03:37.172 LIB libspdk_event_iscsi.a 00:03:37.172 SO libspdk_event_vhost_scsi.so.3.0 00:03:37.172 SO libspdk_event_iscsi.so.6.0 00:03:37.172 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.172 SYMLINK libspdk_event_iscsi.so 00:03:37.431 SO libspdk.so.6.0 00:03:37.431 SYMLINK libspdk.so 00:03:37.689 CXX app/trace/trace.o 00:03:37.689 CC app/spdk_lspci/spdk_lspci.o 00:03:37.689 CC app/trace_record/trace_record.o 00:03:37.689 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.689 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.689 CC app/nvmf_tgt/nvmf_main.o 00:03:37.689 CC app/spdk_tgt/spdk_tgt.o 00:03:37.689 CC examples/ioat/perf/perf.o 00:03:37.689 CC examples/util/zipf/zipf.o 00:03:37.689 CC test/thread/poller_perf/poller_perf.o 00:03:37.689 LINK spdk_lspci 00:03:37.947 LINK nvmf_tgt 00:03:37.947 LINK interrupt_tgt 00:03:37.947 LINK spdk_trace_record 00:03:37.947 LINK iscsi_tgt 00:03:37.947 LINK zipf 00:03:37.947 LINK poller_perf 00:03:37.947 LINK spdk_tgt 00:03:38.205 LINK ioat_perf 00:03:38.205 CC examples/ioat/verify/verify.o 00:03:38.205 CC app/spdk_nvme_perf/perf.o 00:03:38.205 CC app/spdk_nvme_identify/identify.o 00:03:38.205 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.205 LINK spdk_trace 00:03:38.463 LINK verify 00:03:38.463 CC examples/thread/thread/thread_ex.o 00:03:38.463 CC test/dma/test_dma/test_dma.o 00:03:38.463 CC app/spdk_top/spdk_top.o 00:03:38.463 CC app/spdk_dd/spdk_dd.o 00:03:38.463 CC test/app/bdev_svc/bdev_svc.o 00:03:38.463 LINK spdk_nvme_discover 00:03:38.722 LINK thread 00:03:38.722 LINK bdev_svc 00:03:38.722 TEST_HEADER include/spdk/accel.h 00:03:38.722 TEST_HEADER include/spdk/accel_module.h 00:03:38.722 TEST_HEADER include/spdk/assert.h 00:03:38.982 TEST_HEADER include/spdk/barrier.h 00:03:38.982 TEST_HEADER include/spdk/base64.h 00:03:38.982 TEST_HEADER include/spdk/bdev.h 00:03:38.982 TEST_HEADER include/spdk/bdev_module.h 00:03:38.982 TEST_HEADER include/spdk/bdev_zone.h 00:03:38.982 TEST_HEADER include/spdk/bit_array.h 00:03:38.982 TEST_HEADER include/spdk/bit_pool.h 00:03:38.982 TEST_HEADER include/spdk/blob_bdev.h 00:03:38.982 CC app/fio/nvme/fio_plugin.o 00:03:38.982 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:38.982 TEST_HEADER include/spdk/blobfs.h 00:03:38.982 TEST_HEADER include/spdk/blob.h 00:03:38.982 TEST_HEADER include/spdk/conf.h 00:03:38.982 TEST_HEADER include/spdk/config.h 00:03:38.982 TEST_HEADER include/spdk/cpuset.h 00:03:38.982 TEST_HEADER include/spdk/crc16.h 00:03:38.982 TEST_HEADER include/spdk/crc32.h 00:03:38.982 TEST_HEADER include/spdk/crc64.h 00:03:38.982 TEST_HEADER include/spdk/dif.h 00:03:38.982 TEST_HEADER include/spdk/dma.h 00:03:38.982 TEST_HEADER include/spdk/endian.h 00:03:38.982 TEST_HEADER include/spdk/env_dpdk.h 00:03:38.982 TEST_HEADER include/spdk/env.h 00:03:38.982 TEST_HEADER include/spdk/event.h 00:03:38.982 TEST_HEADER include/spdk/fd_group.h 00:03:38.982 TEST_HEADER include/spdk/fd.h 00:03:38.982 TEST_HEADER include/spdk/file.h 00:03:38.982 TEST_HEADER include/spdk/fsdev.h 00:03:38.982 TEST_HEADER include/spdk/fsdev_module.h 00:03:38.983 TEST_HEADER include/spdk/ftl.h 00:03:38.983 TEST_HEADER include/spdk/fuse_dispatcher.h 
00:03:38.983 CC app/fio/bdev/fio_plugin.o 00:03:38.983 TEST_HEADER include/spdk/gpt_spec.h 00:03:38.983 TEST_HEADER include/spdk/hexlify.h 00:03:38.983 TEST_HEADER include/spdk/histogram_data.h 00:03:38.983 LINK test_dma 00:03:38.983 TEST_HEADER include/spdk/idxd.h 00:03:38.983 LINK spdk_dd 00:03:38.983 TEST_HEADER include/spdk/idxd_spec.h 00:03:38.983 TEST_HEADER include/spdk/init.h 00:03:38.983 TEST_HEADER include/spdk/ioat.h 00:03:38.983 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.983 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.983 TEST_HEADER include/spdk/json.h 00:03:38.983 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.983 TEST_HEADER include/spdk/keyring.h 00:03:38.983 TEST_HEADER include/spdk/keyring_module.h 00:03:38.983 TEST_HEADER include/spdk/likely.h 00:03:38.983 TEST_HEADER include/spdk/log.h 00:03:38.983 TEST_HEADER include/spdk/lvol.h 00:03:38.983 TEST_HEADER include/spdk/md5.h 00:03:38.983 TEST_HEADER include/spdk/memory.h 00:03:38.983 TEST_HEADER include/spdk/mmio.h 00:03:38.983 TEST_HEADER include/spdk/nbd.h 00:03:38.983 TEST_HEADER include/spdk/net.h 00:03:38.983 TEST_HEADER include/spdk/notify.h 00:03:38.983 TEST_HEADER include/spdk/nvme.h 00:03:38.983 TEST_HEADER include/spdk/nvme_intel.h 00:03:38.983 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:38.983 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.983 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.983 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.983 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.983 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.983 TEST_HEADER include/spdk/nvmf.h 00:03:38.983 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.983 TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.983 TEST_HEADER include/spdk/opal.h 00:03:38.983 TEST_HEADER include/spdk/opal_spec.h 00:03:38.983 TEST_HEADER include/spdk/pci_ids.h 00:03:38.983 TEST_HEADER include/spdk/pipe.h 00:03:38.983 TEST_HEADER include/spdk/queue.h 00:03:38.983 TEST_HEADER include/spdk/reduce.h 00:03:38.983 TEST_HEADER include/spdk/rpc.h 00:03:38.983 TEST_HEADER include/spdk/scheduler.h 00:03:38.983 TEST_HEADER include/spdk/scsi.h 00:03:39.243 TEST_HEADER include/spdk/scsi_spec.h 00:03:39.243 TEST_HEADER include/spdk/sock.h 00:03:39.243 TEST_HEADER include/spdk/stdinc.h 00:03:39.243 TEST_HEADER include/spdk/string.h 00:03:39.243 TEST_HEADER include/spdk/thread.h 00:03:39.243 TEST_HEADER include/spdk/trace.h 00:03:39.243 TEST_HEADER include/spdk/trace_parser.h 00:03:39.243 TEST_HEADER include/spdk/tree.h 00:03:39.243 TEST_HEADER include/spdk/ublk.h 00:03:39.243 TEST_HEADER include/spdk/util.h 00:03:39.243 TEST_HEADER include/spdk/uuid.h 00:03:39.243 TEST_HEADER include/spdk/version.h 00:03:39.243 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:39.243 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:39.243 TEST_HEADER include/spdk/vhost.h 00:03:39.243 TEST_HEADER include/spdk/vmd.h 00:03:39.243 TEST_HEADER include/spdk/xor.h 00:03:39.243 TEST_HEADER include/spdk/zipf.h 00:03:39.243 CXX test/cpp_headers/accel.o 00:03:39.243 CXX test/cpp_headers/accel_module.o 00:03:39.243 LINK spdk_nvme_identify 00:03:39.243 LINK spdk_nvme_perf 00:03:39.243 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:39.243 CC examples/sock/hello_world/hello_sock.o 00:03:39.500 CXX test/cpp_headers/assert.o 00:03:39.500 CXX test/cpp_headers/barrier.o 00:03:39.500 CXX test/cpp_headers/base64.o 00:03:39.500 LINK spdk_nvme 00:03:39.500 LINK hello_sock 00:03:39.500 CXX test/cpp_headers/bdev.o 00:03:39.500 LINK spdk_bdev 00:03:39.500 CC app/vhost/vhost.o 00:03:39.757 CC 
test/env/mem_callbacks/mem_callbacks.o 00:03:39.757 CXX test/cpp_headers/bdev_module.o 00:03:39.757 LINK nvme_fuzz 00:03:39.758 CXX test/cpp_headers/bdev_zone.o 00:03:39.758 CXX test/cpp_headers/bit_array.o 00:03:39.758 LINK spdk_top 00:03:39.758 LINK vhost 00:03:40.016 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.016 CXX test/cpp_headers/bit_pool.o 00:03:40.016 CC examples/idxd/perf/perf.o 00:03:40.274 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:40.274 CC examples/accel/perf/accel_perf.o 00:03:40.274 LINK lsvmd 00:03:40.274 CC examples/blob/hello_world/hello_blob.o 00:03:40.274 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.274 LINK mem_callbacks 00:03:40.274 CXX test/cpp_headers/blob_bdev.o 00:03:40.274 CC test/event/event_perf/event_perf.o 00:03:40.274 CC examples/nvme/hello_world/hello_world.o 00:03:40.274 LINK idxd_perf 00:03:40.531 LINK hello_fsdev 00:03:40.531 LINK event_perf 00:03:40.531 LINK hello_blob 00:03:40.531 CC examples/vmd/led/led.o 00:03:40.789 CC test/env/vtophys/vtophys.o 00:03:40.789 CC test/event/reactor/reactor.o 00:03:40.789 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.789 LINK led 00:03:40.789 LINK hello_world 00:03:40.789 LINK accel_perf 00:03:40.789 CC test/event/reactor_perf/reactor_perf.o 00:03:40.789 LINK vtophys 00:03:40.789 LINK reactor 00:03:40.789 CC examples/blob/cli/blobcli.o 00:03:41.046 CXX test/cpp_headers/blobfs.o 00:03:41.046 CC test/app/histogram_perf/histogram_perf.o 00:03:41.046 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:41.046 CC examples/nvme/reconnect/reconnect.o 00:03:41.046 LINK reactor_perf 00:03:41.046 CC test/event/app_repeat/app_repeat.o 00:03:41.305 CXX test/cpp_headers/blob.o 00:03:41.305 CXX test/cpp_headers/conf.o 00:03:41.305 LINK env_dpdk_post_init 00:03:41.305 LINK histogram_perf 00:03:41.305 LINK app_repeat 00:03:41.305 CXX test/cpp_headers/config.o 00:03:41.305 CXX test/cpp_headers/cpuset.o 00:03:41.564 LINK blobcli 00:03:41.564 LINK reconnect 00:03:41.564 CC test/app/jsoncat/jsoncat.o 00:03:41.564 CC examples/bdev/hello_world/hello_bdev.o 00:03:41.564 CC test/env/memory/memory_ut.o 00:03:41.564 CXX test/cpp_headers/crc16.o 00:03:41.564 CC test/app/stub/stub.o 00:03:41.564 CC test/event/scheduler/scheduler.o 00:03:41.821 CXX test/cpp_headers/crc32.o 00:03:41.821 LINK jsoncat 00:03:41.821 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:41.821 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.821 CXX test/cpp_headers/crc64.o 00:03:41.821 LINK stub 00:03:41.821 CXX test/cpp_headers/dif.o 00:03:41.821 LINK hello_bdev 00:03:42.079 LINK scheduler 00:03:42.079 LINK iscsi_fuzz 00:03:42.079 CC test/nvme/aer/aer.o 00:03:42.079 CXX test/cpp_headers/dma.o 00:03:42.079 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:42.079 CC test/nvme/reset/reset.o 00:03:42.338 CC test/env/pci/pci_ut.o 00:03:42.338 CXX test/cpp_headers/endian.o 00:03:42.338 LINK nvme_manage 00:03:42.338 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:42.338 LINK aer 00:03:42.338 CC test/nvme/sgl/sgl.o 00:03:42.338 CC test/nvme/e2edp/nvme_dp.o 00:03:42.338 LINK reset 00:03:42.634 CXX test/cpp_headers/env_dpdk.o 00:03:42.634 LINK pci_ut 00:03:42.634 CC test/nvme/overhead/overhead.o 00:03:42.634 LINK sgl 00:03:42.634 CC examples/nvme/arbitration/arbitration.o 00:03:42.634 CXX test/cpp_headers/env.o 00:03:42.634 LINK nvme_dp 00:03:42.634 LINK bdevperf 00:03:42.634 CC test/nvme/err_injection/err_injection.o 00:03:42.918 LINK vhost_fuzz 00:03:42.918 CXX test/cpp_headers/event.o 00:03:42.918 LINK memory_ut 00:03:42.918 LINK overhead 00:03:42.918 CC 
test/nvme/startup/startup.o 00:03:42.918 CC test/nvme/reserve/reserve.o 00:03:42.918 LINK err_injection 00:03:42.918 LINK arbitration 00:03:42.918 CC test/nvme/simple_copy/simple_copy.o 00:03:43.176 CXX test/cpp_headers/fd_group.o 00:03:43.176 CC test/rpc_client/rpc_client_test.o 00:03:43.176 CXX test/cpp_headers/fd.o 00:03:43.176 CC test/accel/dif/dif.o 00:03:43.176 LINK startup 00:03:43.176 CXX test/cpp_headers/file.o 00:03:43.176 LINK reserve 00:03:43.176 LINK rpc_client_test 00:03:43.176 LINK simple_copy 00:03:43.176 CC examples/nvme/hotplug/hotplug.o 00:03:43.434 CXX test/cpp_headers/fsdev.o 00:03:43.434 CXX test/cpp_headers/fsdev_module.o 00:03:43.434 CC test/nvme/connect_stress/connect_stress.o 00:03:43.434 CC test/blobfs/mkfs/mkfs.o 00:03:43.434 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:43.434 LINK hotplug 00:03:43.692 CC examples/nvme/abort/abort.o 00:03:43.692 CC test/lvol/esnap/esnap.o 00:03:43.692 CC test/nvme/boot_partition/boot_partition.o 00:03:43.692 CXX test/cpp_headers/ftl.o 00:03:43.692 LINK mkfs 00:03:43.692 CC test/nvme/compliance/nvme_compliance.o 00:03:43.692 LINK connect_stress 00:03:43.950 LINK boot_partition 00:03:43.950 LINK cmb_copy 00:03:43.950 CC test/nvme/fused_ordering/fused_ordering.o 00:03:43.950 LINK dif 00:03:43.950 CXX test/cpp_headers/fuse_dispatcher.o 00:03:43.950 CXX test/cpp_headers/gpt_spec.o 00:03:43.950 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:44.207 LINK fused_ordering 00:03:44.207 LINK nvme_compliance 00:03:44.207 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:44.207 LINK abort 00:03:44.207 CXX test/cpp_headers/hexlify.o 00:03:44.207 CC test/nvme/fdp/fdp.o 00:03:44.207 CC test/nvme/cuse/cuse.o 00:03:44.207 LINK doorbell_aers 00:03:44.207 CXX test/cpp_headers/histogram_data.o 00:03:44.207 CC test/bdev/bdevio/bdevio.o 00:03:44.466 CXX test/cpp_headers/idxd.o 00:03:44.466 LINK pmr_persistence 00:03:44.466 CXX test/cpp_headers/idxd_spec.o 00:03:44.466 CXX test/cpp_headers/init.o 00:03:44.466 CXX test/cpp_headers/ioat.o 00:03:44.466 CXX test/cpp_headers/ioat_spec.o 00:03:44.466 LINK fdp 00:03:44.466 CXX test/cpp_headers/iscsi_spec.o 00:03:44.724 CXX test/cpp_headers/json.o 00:03:44.724 CXX test/cpp_headers/jsonrpc.o 00:03:44.724 CXX test/cpp_headers/keyring.o 00:03:44.724 CXX test/cpp_headers/keyring_module.o 00:03:44.724 CXX test/cpp_headers/likely.o 00:03:44.724 CXX test/cpp_headers/log.o 00:03:44.724 LINK bdevio 00:03:44.724 CXX test/cpp_headers/lvol.o 00:03:44.724 CXX test/cpp_headers/md5.o 00:03:44.724 CXX test/cpp_headers/memory.o 00:03:44.982 CXX test/cpp_headers/mmio.o 00:03:44.982 CXX test/cpp_headers/nbd.o 00:03:44.982 CC examples/nvmf/nvmf/nvmf.o 00:03:44.982 CXX test/cpp_headers/net.o 00:03:44.982 CXX test/cpp_headers/notify.o 00:03:44.982 CXX test/cpp_headers/nvme.o 00:03:44.982 CXX test/cpp_headers/nvme_intel.o 00:03:44.982 CXX test/cpp_headers/nvme_ocssd.o 00:03:44.982 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:44.982 CXX test/cpp_headers/nvme_spec.o 00:03:45.241 CXX test/cpp_headers/nvme_zns.o 00:03:45.241 CXX test/cpp_headers/nvmf_cmd.o 00:03:45.241 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:45.241 CXX test/cpp_headers/nvmf.o 00:03:45.241 LINK nvmf 00:03:45.241 CXX test/cpp_headers/nvmf_spec.o 00:03:45.241 CXX test/cpp_headers/nvmf_transport.o 00:03:45.241 CXX test/cpp_headers/opal.o 00:03:45.499 CXX test/cpp_headers/opal_spec.o 00:03:45.499 CXX test/cpp_headers/pci_ids.o 00:03:45.499 CXX test/cpp_headers/pipe.o 00:03:45.499 CXX test/cpp_headers/queue.o 00:03:45.499 CXX test/cpp_headers/reduce.o 00:03:45.499 CXX 
test/cpp_headers/rpc.o 00:03:45.499 CXX test/cpp_headers/scheduler.o 00:03:45.499 CXX test/cpp_headers/scsi.o 00:03:45.499 CXX test/cpp_headers/scsi_spec.o 00:03:45.499 CXX test/cpp_headers/sock.o 00:03:45.499 CXX test/cpp_headers/stdinc.o 00:03:45.499 CXX test/cpp_headers/string.o 00:03:45.758 CXX test/cpp_headers/thread.o 00:03:45.758 CXX test/cpp_headers/trace.o 00:03:45.758 CXX test/cpp_headers/trace_parser.o 00:03:45.758 CXX test/cpp_headers/tree.o 00:03:45.758 CXX test/cpp_headers/ublk.o 00:03:45.758 LINK cuse 00:03:45.758 CXX test/cpp_headers/util.o 00:03:45.758 CXX test/cpp_headers/uuid.o 00:03:45.758 CXX test/cpp_headers/version.o 00:03:45.758 CXX test/cpp_headers/vfio_user_pci.o 00:03:45.758 CXX test/cpp_headers/vfio_user_spec.o 00:03:45.758 CXX test/cpp_headers/vhost.o 00:03:45.758 CXX test/cpp_headers/vmd.o 00:03:45.758 CXX test/cpp_headers/xor.o 00:03:46.016 CXX test/cpp_headers/zipf.o 00:03:49.301 LINK esnap 00:03:49.301 00:03:49.301 real 1m42.795s 00:03:49.301 user 9m24.769s 00:03:49.301 sys 1m56.393s 00:03:49.301 08:14:50 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:49.301 08:14:50 make -- common/autotest_common.sh@10 -- $ set +x 00:03:49.301 ************************************ 00:03:49.301 END TEST make 00:03:49.301 ************************************ 00:03:49.301 08:14:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.301 08:14:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.301 08:14:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.301 08:14:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.301 08:14:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:49.301 08:14:51 -- pm/common@44 -- $ pid=5238 00:03:49.301 08:14:51 -- pm/common@50 -- $ kill -TERM 5238 00:03:49.301 08:14:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.301 08:14:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.301 08:14:51 -- pm/common@44 -- $ pid=5240 00:03:49.301 08:14:51 -- pm/common@50 -- $ kill -TERM 5240 00:03:49.559 08:14:51 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:49.559 08:14:51 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:49.559 08:14:51 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:49.559 08:14:51 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:49.559 08:14:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.559 08:14:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.559 08:14:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.559 08:14:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.559 08:14:51 -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.559 08:14:51 -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.559 08:14:51 -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.559 08:14:51 -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.559 08:14:51 -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.559 08:14:51 -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.559 08:14:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.559 08:14:51 -- scripts/common.sh@344 -- # case "$op" in 00:03:49.559 08:14:51 -- scripts/common.sh@345 -- # : 1 00:03:49.559 08:14:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.559 08:14:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.559 08:14:51 -- scripts/common.sh@365 -- # decimal 1 00:03:49.559 08:14:51 -- scripts/common.sh@353 -- # local d=1 00:03:49.559 08:14:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.559 08:14:51 -- scripts/common.sh@355 -- # echo 1 00:03:49.559 08:14:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.559 08:14:51 -- scripts/common.sh@366 -- # decimal 2 00:03:49.559 08:14:51 -- scripts/common.sh@353 -- # local d=2 00:03:49.559 08:14:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.559 08:14:51 -- scripts/common.sh@355 -- # echo 2 00:03:49.559 08:14:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.559 08:14:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.559 08:14:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.559 08:14:51 -- scripts/common.sh@368 -- # return 0 00:03:49.559 08:14:51 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.559 08:14:51 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.559 --rc genhtml_branch_coverage=1 00:03:49.559 --rc genhtml_function_coverage=1 00:03:49.559 --rc genhtml_legend=1 00:03:49.559 --rc geninfo_all_blocks=1 00:03:49.559 --rc geninfo_unexecuted_blocks=1 00:03:49.559 00:03:49.559 ' 00:03:49.559 08:14:51 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.559 --rc genhtml_branch_coverage=1 00:03:49.559 --rc genhtml_function_coverage=1 00:03:49.559 --rc genhtml_legend=1 00:03:49.559 --rc geninfo_all_blocks=1 00:03:49.559 --rc geninfo_unexecuted_blocks=1 00:03:49.559 00:03:49.559 ' 00:03:49.559 08:14:51 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.559 --rc genhtml_branch_coverage=1 00:03:49.559 --rc genhtml_function_coverage=1 00:03:49.559 --rc genhtml_legend=1 00:03:49.559 --rc geninfo_all_blocks=1 00:03:49.559 --rc geninfo_unexecuted_blocks=1 00:03:49.559 00:03:49.559 ' 00:03:49.559 08:14:51 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:49.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.559 --rc genhtml_branch_coverage=1 00:03:49.559 --rc genhtml_function_coverage=1 00:03:49.559 --rc genhtml_legend=1 00:03:49.559 --rc geninfo_all_blocks=1 00:03:49.559 --rc geninfo_unexecuted_blocks=1 00:03:49.559 00:03:49.559 ' 00:03:49.559 08:14:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:49.559 08:14:51 -- nvmf/common.sh@7 -- # uname -s 00:03:49.559 08:14:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.559 08:14:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.559 08:14:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.559 08:14:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.559 08:14:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.559 08:14:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.559 08:14:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.559 08:14:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.559 08:14:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.559 08:14:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.559 08:14:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:03:49.559 
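[Editor's sketch] The xtrace above walks through the lcov version gate in scripts/common.sh: the installed lcov reports 1.15, the helper compares it field by field against 2, and because 1.15 sorts lower the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are selected. A compact standalone sketch of that dotted-version comparison follows; the helper name version_lt and its looping details are illustrative simplifications, not the verbatim cmp_versions implementation traced above.

    # version_lt A B -> returns 0 (true) if dotted version A < B, else 1.
    # Simplified re-creation of the comparison traced above.
    version_lt() {
        local -a v1 v2
        local i len
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1
    }

    # Mirroring the trace: lcov 1.15 is below 2, so the lcov_branch_coverage /
    # lcov_function_coverage option spelling gets picked up.
    ver=$(lcov --version | awk '{print $NF}')
    if version_lt "$ver" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi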
08:14:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:03:49.559 08:14:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.559 08:14:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.559 08:14:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:49.559 08:14:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.559 08:14:51 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:49.559 08:14:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.559 08:14:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.559 08:14:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.559 08:14:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.559 08:14:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.559 08:14:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.559 08:14:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.559 08:14:51 -- paths/export.sh@5 -- # export PATH 00:03:49.559 08:14:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.559 08:14:51 -- nvmf/common.sh@51 -- # : 0 00:03:49.559 08:14:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.559 08:14:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:49.559 08:14:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.559 08:14:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.559 08:14:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.559 08:14:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.559 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.559 08:14:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.559 08:14:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.559 08:14:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.559 08:14:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.559 08:14:51 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.559 08:14:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.559 08:14:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.559 08:14:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.559 08:14:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.559 08:14:51 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:49.559 08:14:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.819 08:14:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.819 08:14:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:49.819 08:14:51 -- spdk/autotest.sh@48 -- # udevadm_pid=54485 00:03:49.819 08:14:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.819 08:14:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.819 08:14:51 -- pm/common@17 -- # local monitor 00:03:49.819 08:14:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.819 08:14:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.819 08:14:51 -- pm/common@25 -- # sleep 1 00:03:49.819 08:14:51 -- pm/common@21 -- # date +%s 00:03:49.819 08:14:51 -- pm/common@21 -- # date +%s 00:03:49.819 08:14:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728980091 00:03:49.819 08:14:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728980091 00:03:49.819 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728980091_collect-vmstat.pm.log 00:03:49.819 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728980091_collect-cpu-load.pm.log 00:03:50.762 08:14:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:50.762 08:14:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:50.762 08:14:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:50.762 08:14:52 -- common/autotest_common.sh@10 -- # set +x 00:03:50.762 08:14:52 -- spdk/autotest.sh@59 -- # create_test_list 00:03:50.762 08:14:52 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:50.762 08:14:52 -- common/autotest_common.sh@10 -- # set +x 00:03:50.762 08:14:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:50.762 08:14:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:50.762 08:14:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:50.762 08:14:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:50.762 08:14:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:50.762 08:14:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.762 08:14:52 -- common/autotest_common.sh@1455 -- # uname 00:03:50.762 08:14:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:50.762 08:14:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.762 08:14:52 -- common/autotest_common.sh@1475 -- # uname 00:03:50.762 08:14:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:50.762 08:14:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:50.762 08:14:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:50.762 lcov: LCOV version 1.15 00:03:50.762 08:14:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:08.848 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:08.848 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:26.937 08:15:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:26.937 08:15:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.937 08:15:28 -- common/autotest_common.sh@10 -- # set +x 00:04:26.937 08:15:28 -- spdk/autotest.sh@78 -- # rm -f 00:04:26.937 08:15:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.454 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:27.454 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:27.454 08:15:28 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:27.454 08:15:28 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:27.454 08:15:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:27.454 08:15:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:27.454 08:15:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:27.454 08:15:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:27.454 08:15:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:27.454 08:15:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:27.454 08:15:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:27.454 08:15:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:27.454 08:15:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:04:27.454 08:15:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:04:27.454 08:15:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:27.454 08:15:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:27.454 08:15:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:27.454 08:15:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:04:27.454 08:15:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:04:27.454 08:15:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:27.454 08:15:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:27.454 08:15:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:27.454 08:15:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:27.454 08:15:28 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:27.454 08:15:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:27.454 08:15:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:27.454 08:15:28 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:27.454 08:15:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.454 08:15:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:27.454 08:15:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:27.454 08:15:28 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:27.454 08:15:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:27.454 No valid GPT data, bailing 
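[Editor's sketch] Around this point autotest iterates over the whole NVMe namespaces, asks scripts/spdk-gpt.py and blkid whether each one carries a partition table, and zero-fills the first mebibyte when nothing is found (the dd lines that follow). A minimal sketch of that check-and-wipe loop is below; it keeps only the blkid test and assumes an empty PTTYPE means the namespace is safe to clear, glossing over the extra spdk-gpt.py probe the real script performs.

    # Wipe the first MiB of every whole NVMe namespace (skip partitions)
    # unless blkid reports a partition table. Simplified from the traced
    # block_in_use/dd sequence; the device glob matches the one in the log.
    shopt -s extglob nullglob
    for dev in /dev/nvme*n!(*p*); do
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            # No partition table detected -> clear stale filesystem/GPT metadata.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done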
00:04:27.454 08:15:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:27.454 08:15:29 -- scripts/common.sh@394 -- # pt= 00:04:27.454 08:15:29 -- scripts/common.sh@395 -- # return 1 00:04:27.454 08:15:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:27.454 1+0 records in 00:04:27.454 1+0 records out 00:04:27.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00804972 s, 130 MB/s 00:04:27.454 08:15:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.454 08:15:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:27.454 08:15:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:04:27.454 08:15:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:04:27.454 08:15:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:27.454 No valid GPT data, bailing 00:04:27.454 08:15:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:27.454 08:15:29 -- scripts/common.sh@394 -- # pt= 00:04:27.454 08:15:29 -- scripts/common.sh@395 -- # return 1 00:04:27.454 08:15:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:27.454 1+0 records in 00:04:27.454 1+0 records out 00:04:27.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539912 s, 194 MB/s 00:04:27.454 08:15:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.454 08:15:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:27.454 08:15:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:04:27.454 08:15:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:04:27.454 08:15:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:27.713 No valid GPT data, bailing 00:04:27.713 08:15:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:27.713 08:15:29 -- scripts/common.sh@394 -- # pt= 00:04:27.713 08:15:29 -- scripts/common.sh@395 -- # return 1 00:04:27.713 08:15:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:27.713 1+0 records in 00:04:27.713 1+0 records out 00:04:27.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524971 s, 200 MB/s 00:04:27.713 08:15:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:27.713 08:15:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:27.713 08:15:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:27.713 08:15:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:27.713 08:15:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:27.713 No valid GPT data, bailing 00:04:27.713 08:15:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:27.713 08:15:29 -- scripts/common.sh@394 -- # pt= 00:04:27.713 08:15:29 -- scripts/common.sh@395 -- # return 1 00:04:27.713 08:15:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:27.713 1+0 records in 00:04:27.713 1+0 records out 00:04:27.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00530119 s, 198 MB/s 00:04:27.713 08:15:29 -- spdk/autotest.sh@105 -- # sync 00:04:27.713 08:15:29 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:27.713 08:15:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:27.713 08:15:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:30.245 08:15:31 -- spdk/autotest.sh@111 -- # uname -s 00:04:30.245 08:15:31 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:04:30.245 08:15:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:30.245 08:15:31 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:30.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.503 Hugepages 00:04:30.503 node hugesize free / total 00:04:30.503 node0 1048576kB 0 / 0 00:04:30.503 node0 2048kB 0 / 0 00:04:30.503 00:04:30.503 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:30.762 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:30.762 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:30.762 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:30.762 08:15:32 -- spdk/autotest.sh@117 -- # uname -s 00:04:30.762 08:15:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:30.762 08:15:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:30.762 08:15:32 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.695 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.695 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.695 08:15:33 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:32.630 08:15:34 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:32.630 08:15:34 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:32.630 08:15:34 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.630 08:15:34 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:32.630 08:15:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:32.630 08:15:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:32.630 08:15:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.630 08:15:34 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:32.630 08:15:34 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.888 08:15:34 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:32.888 08:15:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:32.888 08:15:34 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:33.147 Waiting for block devices as requested 00:04:33.147 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:33.407 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:33.407 08:15:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:33.407 08:15:35 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:33.407 08:15:35 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:33.407 08:15:35 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:33.407 08:15:35 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:33.407 08:15:35 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:33.407 08:15:35 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:33.407 08:15:35 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:04:33.407 08:15:35 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:33.407 08:15:35 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:33.407 08:15:35 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:33.407 08:15:35 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:33.407 08:15:35 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:33.407 08:15:35 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:33.407 08:15:35 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:33.407 08:15:35 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:33.407 08:15:35 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:33.407 08:15:35 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:33.407 08:15:35 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:33.407 08:15:35 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:33.407 08:15:35 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:33.407 08:15:35 -- common/autotest_common.sh@1541 -- # continue 00:04:33.407 08:15:35 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:33.407 08:15:35 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:33.407 08:15:35 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:33.407 08:15:35 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:33.407 08:15:35 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:33.407 08:15:35 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:33.407 08:15:35 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:33.407 08:15:35 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:33.407 08:15:35 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:33.407 08:15:35 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:33.407 08:15:35 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:33.407 08:15:35 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:33.407 08:15:35 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:33.407 08:15:35 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:33.407 08:15:35 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:33.407 08:15:35 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:33.407 08:15:35 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:33.407 08:15:35 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:33.407 08:15:35 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:33.407 08:15:35 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:33.407 08:15:35 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:33.407 08:15:35 -- common/autotest_common.sh@1541 -- # continue 00:04:33.407 08:15:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:33.407 08:15:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:33.407 08:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:33.407 08:15:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:33.407 08:15:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:33.407 08:15:35 -- common/autotest_common.sh@10 -- # set +x 00:04:33.407 08:15:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.343 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:34.343 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.343 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:34.343 08:15:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:34.343 08:15:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:34.343 08:15:36 -- common/autotest_common.sh@10 -- # set +x 00:04:34.602 08:15:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:34.602 08:15:36 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:34.602 08:15:36 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:34.602 08:15:36 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:34.602 08:15:36 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:34.602 08:15:36 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:34.602 08:15:36 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:34.602 08:15:36 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:34.602 08:15:36 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:34.602 08:15:36 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:34.602 08:15:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:34.602 08:15:36 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:34.602 08:15:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:34.602 08:15:36 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:34.602 08:15:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:34.602 08:15:36 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:34.602 08:15:36 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:34.602 08:15:36 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:34.602 08:15:36 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:34.602 08:15:36 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:34.602 08:15:36 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:34.602 08:15:36 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:34.602 08:15:36 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:34.602 08:15:36 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:34.602 08:15:36 -- common/autotest_common.sh@1570 -- # return 0 00:04:34.602 08:15:36 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:34.602 08:15:36 -- common/autotest_common.sh@1578 -- # return 0 00:04:34.602 08:15:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:34.602 08:15:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:34.602 08:15:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:34.602 08:15:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:34.602 08:15:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:34.602 08:15:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:34.602 08:15:36 -- common/autotest_common.sh@10 -- # set +x 00:04:34.602 08:15:36 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:34.602 08:15:36 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:34.602 08:15:36 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:34.602 08:15:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:34.602 08:15:36 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.602 08:15:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.602 08:15:36 -- common/autotest_common.sh@10 -- # set +x 00:04:34.602 ************************************ 00:04:34.603 START TEST env 00:04:34.603 ************************************ 00:04:34.603 08:15:36 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:34.603 * Looking for test storage... 00:04:34.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:34.603 08:15:36 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.603 08:15:36 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.603 08:15:36 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.862 08:15:36 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.862 08:15:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.862 08:15:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.862 08:15:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.862 08:15:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.862 08:15:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.862 08:15:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.862 08:15:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.862 08:15:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.862 08:15:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.862 08:15:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.862 08:15:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.862 08:15:36 env -- scripts/common.sh@344 -- # case "$op" in 00:04:34.862 08:15:36 env -- scripts/common.sh@345 -- # : 1 00:04:34.862 08:15:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.862 08:15:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.862 08:15:36 env -- scripts/common.sh@365 -- # decimal 1 00:04:34.862 08:15:36 env -- scripts/common.sh@353 -- # local d=1 00:04:34.862 08:15:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.862 08:15:36 env -- scripts/common.sh@355 -- # echo 1 00:04:34.862 08:15:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.862 08:15:36 env -- scripts/common.sh@366 -- # decimal 2 00:04:34.862 08:15:36 env -- scripts/common.sh@353 -- # local d=2 00:04:34.862 08:15:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.862 08:15:36 env -- scripts/common.sh@355 -- # echo 2 00:04:34.862 08:15:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.862 08:15:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.862 08:15:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.862 08:15:36 env -- scripts/common.sh@368 -- # return 0 00:04:34.862 08:15:36 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.862 08:15:36 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.862 --rc genhtml_branch_coverage=1 00:04:34.862 --rc genhtml_function_coverage=1 00:04:34.862 --rc genhtml_legend=1 00:04:34.862 --rc geninfo_all_blocks=1 00:04:34.862 --rc geninfo_unexecuted_blocks=1 00:04:34.862 00:04:34.862 ' 00:04:34.862 08:15:36 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.862 --rc genhtml_branch_coverage=1 00:04:34.862 --rc genhtml_function_coverage=1 00:04:34.862 --rc genhtml_legend=1 00:04:34.862 --rc geninfo_all_blocks=1 00:04:34.862 --rc geninfo_unexecuted_blocks=1 00:04:34.862 00:04:34.862 ' 00:04:34.862 08:15:36 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.862 --rc genhtml_branch_coverage=1 00:04:34.862 --rc genhtml_function_coverage=1 00:04:34.862 --rc genhtml_legend=1 00:04:34.862 --rc geninfo_all_blocks=1 00:04:34.862 --rc geninfo_unexecuted_blocks=1 00:04:34.862 00:04:34.862 ' 00:04:34.862 08:15:36 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.862 --rc genhtml_branch_coverage=1 00:04:34.862 --rc genhtml_function_coverage=1 00:04:34.862 --rc genhtml_legend=1 00:04:34.862 --rc geninfo_all_blocks=1 00:04:34.862 --rc geninfo_unexecuted_blocks=1 00:04:34.862 00:04:34.862 ' 00:04:34.862 08:15:36 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:34.862 08:15:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.862 08:15:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.862 08:15:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.862 ************************************ 00:04:34.862 START TEST env_memory 00:04:34.862 ************************************ 00:04:34.862 08:15:36 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:34.862 00:04:34.862 00:04:34.862 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.862 http://cunit.sourceforge.net/ 00:04:34.862 00:04:34.862 00:04:34.862 Suite: memory 00:04:34.862 Test: alloc and free memory map ...[2024-10-15 08:15:36.429630] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:34.862 passed 00:04:34.862 Test: mem map translation ...[2024-10-15 08:15:36.460947] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:34.862 [2024-10-15 08:15:36.461045] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:34.862 [2024-10-15 08:15:36.461142] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:34.862 [2024-10-15 08:15:36.461170] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:34.862 passed 00:04:34.862 Test: mem map registration ...[2024-10-15 08:15:36.525375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:34.862 [2024-10-15 08:15:36.525463] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:34.862 passed 00:04:35.122 Test: mem map adjacent registrations ...passed 00:04:35.122 00:04:35.122 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.122 suites 1 1 n/a 0 0 00:04:35.122 tests 4 4 4 0 0 00:04:35.122 asserts 152 152 152 0 n/a 00:04:35.122 00:04:35.122 Elapsed time = 0.215 seconds 00:04:35.122 00:04:35.122 real 0m0.233s 00:04:35.122 user 0m0.215s 00:04:35.122 sys 0m0.015s 00:04:35.122 08:15:36 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.122 08:15:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:35.122 ************************************ 00:04:35.122 END TEST env_memory 00:04:35.122 ************************************ 00:04:35.122 08:15:36 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:35.122 08:15:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.122 08:15:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.122 08:15:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.122 ************************************ 00:04:35.122 START TEST env_vtophys 00:04:35.122 ************************************ 00:04:35.122 08:15:36 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:35.122 EAL: lib.eal log level changed from notice to debug 00:04:35.122 EAL: Detected lcore 0 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 1 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 2 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 3 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 4 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 5 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 6 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 7 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 8 as core 0 on socket 0 00:04:35.122 EAL: Detected lcore 9 as core 0 on socket 0 00:04:35.122 EAL: Maximum logical cores by configuration: 128 00:04:35.122 EAL: Detected CPU lcores: 10 00:04:35.122 EAL: Detected NUMA nodes: 1 00:04:35.122 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:35.122 EAL: Detected shared linkage of DPDK 00:04:35.122 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:35.122 EAL: Selected IOVA mode 'PA' 00:04:35.122 EAL: Probing VFIO support... 00:04:35.122 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:35.122 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:35.122 EAL: Ask a virtual area of 0x2e000 bytes 00:04:35.122 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:35.122 EAL: Setting up physically contiguous memory... 00:04:35.122 EAL: Setting maximum number of open files to 524288 00:04:35.122 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:35.122 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:35.122 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.122 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:35.122 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.122 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.122 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:35.122 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:35.122 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.122 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:35.122 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.122 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.122 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:35.122 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:35.122 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.122 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:35.122 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.122 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.122 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:35.122 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:35.122 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.122 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:35.122 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.122 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.122 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:35.122 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:35.122 EAL: Hugepages will be freed exactly as allocated. 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: TSC frequency is ~2200000 KHz 00:04:35.122 EAL: Main lcore 0 is ready (tid=7f5d18e85a00;cpuset=[0]) 00:04:35.122 EAL: Trying to obtain current memory policy. 00:04:35.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.122 EAL: Restoring previous memory policy: 0 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was expanded by 2MB 00:04:35.122 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:35.122 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:35.122 EAL: Mem event callback 'spdk:(nil)' registered 00:04:35.122 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:35.122 00:04:35.122 00:04:35.122 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.122 http://cunit.sourceforge.net/ 00:04:35.122 00:04:35.122 00:04:35.122 Suite: components_suite 00:04:35.122 Test: vtophys_malloc_test ...passed 00:04:35.122 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:35.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.122 EAL: Restoring previous memory policy: 4 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was expanded by 4MB 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was shrunk by 4MB 00:04:35.122 EAL: Trying to obtain current memory policy. 00:04:35.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.122 EAL: Restoring previous memory policy: 4 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was expanded by 6MB 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was shrunk by 6MB 00:04:35.122 EAL: Trying to obtain current memory policy. 00:04:35.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.122 EAL: Restoring previous memory policy: 4 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was expanded by 10MB 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was shrunk by 10MB 00:04:35.122 EAL: Trying to obtain current memory policy. 00:04:35.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.122 EAL: Restoring previous memory policy: 4 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was expanded by 18MB 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was shrunk by 18MB 00:04:35.122 EAL: Trying to obtain current memory policy. 00:04:35.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.122 EAL: Restoring previous memory policy: 4 00:04:35.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.122 EAL: request: mp_malloc_sync 00:04:35.122 EAL: No shared files mode enabled, IPC is disabled 00:04:35.122 EAL: Heap on socket 0 was expanded by 34MB 00:04:35.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.381 EAL: request: mp_malloc_sync 00:04:35.381 EAL: No shared files mode enabled, IPC is disabled 00:04:35.381 EAL: Heap on socket 0 was shrunk by 34MB 00:04:35.381 EAL: Trying to obtain current memory policy. 
00:04:35.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.381 EAL: Restoring previous memory policy: 4 00:04:35.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.381 EAL: request: mp_malloc_sync 00:04:35.381 EAL: No shared files mode enabled, IPC is disabled 00:04:35.381 EAL: Heap on socket 0 was expanded by 66MB 00:04:35.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.381 EAL: request: mp_malloc_sync 00:04:35.381 EAL: No shared files mode enabled, IPC is disabled 00:04:35.381 EAL: Heap on socket 0 was shrunk by 66MB 00:04:35.381 EAL: Trying to obtain current memory policy. 00:04:35.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.381 EAL: Restoring previous memory policy: 4 00:04:35.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.381 EAL: request: mp_malloc_sync 00:04:35.381 EAL: No shared files mode enabled, IPC is disabled 00:04:35.381 EAL: Heap on socket 0 was expanded by 130MB 00:04:35.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.381 EAL: request: mp_malloc_sync 00:04:35.381 EAL: No shared files mode enabled, IPC is disabled 00:04:35.381 EAL: Heap on socket 0 was shrunk by 130MB 00:04:35.381 EAL: Trying to obtain current memory policy. 00:04:35.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.381 EAL: Restoring previous memory policy: 4 00:04:35.381 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.381 EAL: request: mp_malloc_sync 00:04:35.381 EAL: No shared files mode enabled, IPC is disabled 00:04:35.381 EAL: Heap on socket 0 was expanded by 258MB 00:04:35.640 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.640 EAL: request: mp_malloc_sync 00:04:35.640 EAL: No shared files mode enabled, IPC is disabled 00:04:35.640 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.640 EAL: Trying to obtain current memory policy. 00:04:35.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.899 EAL: Restoring previous memory policy: 4 00:04:35.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.899 EAL: request: mp_malloc_sync 00:04:35.899 EAL: No shared files mode enabled, IPC is disabled 00:04:35.899 EAL: Heap on socket 0 was expanded by 514MB 00:04:35.899 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.158 EAL: request: mp_malloc_sync 00:04:36.158 EAL: No shared files mode enabled, IPC is disabled 00:04:36.158 EAL: Heap on socket 0 was shrunk by 514MB 00:04:36.158 EAL: Trying to obtain current memory policy. 
00:04:36.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.490 EAL: Restoring previous memory policy: 4 00:04:36.490 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.490 EAL: request: mp_malloc_sync 00:04:36.490 EAL: No shared files mode enabled, IPC is disabled 00:04:36.490 EAL: Heap on socket 0 was expanded by 1026MB 00:04:36.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.008 passed 00:04:37.008 00:04:37.008 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.008 suites 1 1 n/a 0 0 00:04:37.008 tests 2 2 2 0 0 00:04:37.008 asserts 5379 5379 5379 0 n/a 00:04:37.008 00:04:37.008 Elapsed time = 1.738 seconds 00:04:37.008 EAL: request: mp_malloc_sync 00:04:37.008 EAL: No shared files mode enabled, IPC is disabled 00:04:37.008 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:37.008 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.008 EAL: request: mp_malloc_sync 00:04:37.008 EAL: No shared files mode enabled, IPC is disabled 00:04:37.008 EAL: Heap on socket 0 was shrunk by 2MB 00:04:37.008 EAL: No shared files mode enabled, IPC is disabled 00:04:37.008 EAL: No shared files mode enabled, IPC is disabled 00:04:37.008 EAL: No shared files mode enabled, IPC is disabled 00:04:37.008 00:04:37.008 real 0m1.944s 00:04:37.008 user 0m1.094s 00:04:37.008 sys 0m0.705s 00:04:37.008 08:15:38 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.008 ************************************ 00:04:37.008 END TEST env_vtophys 00:04:37.008 ************************************ 00:04:37.008 08:15:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:37.008 08:15:38 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:37.008 08:15:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.008 08:15:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.008 08:15:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.008 ************************************ 00:04:37.008 START TEST env_pci 00:04:37.008 ************************************ 00:04:37.008 08:15:38 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:37.008 00:04:37.008 00:04:37.008 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.008 http://cunit.sourceforge.net/ 00:04:37.008 00:04:37.008 00:04:37.008 Suite: pci 00:04:37.008 Test: pci_hook ...[2024-10-15 08:15:38.672011] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56760 has claimed it 00:04:37.008 passed 00:04:37.008 00:04:37.008 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.008 suites 1 1 n/a 0 0 00:04:37.008 tests 1 1 1 0 0 00:04:37.008 asserts 25 25 25 0 n/a 00:04:37.008 00:04:37.008 Elapsed time = 0.002 seconds 00:04:37.008 EAL: Cannot find device (10000:00:01.0) 00:04:37.008 EAL: Failed to attach device on primary process 00:04:37.008 00:04:37.008 real 0m0.020s 00:04:37.008 user 0m0.010s 00:04:37.008 sys 0m0.010s 00:04:37.008 08:15:38 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.008 08:15:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:37.008 ************************************ 00:04:37.008 END TEST env_pci 00:04:37.008 ************************************ 00:04:37.008 08:15:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.008 08:15:38 env -- env/env.sh@15 -- # uname 00:04:37.008 08:15:38 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.008 08:15:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.008 08:15:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.008 08:15:38 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:37.008 08:15:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.008 08:15:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.008 ************************************ 00:04:37.008 START TEST env_dpdk_post_init 00:04:37.008 ************************************ 00:04:37.008 08:15:38 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.268 EAL: Detected CPU lcores: 10 00:04:37.268 EAL: Detected NUMA nodes: 1 00:04:37.268 EAL: Detected shared linkage of DPDK 00:04:37.268 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.268 EAL: Selected IOVA mode 'PA' 00:04:37.268 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.268 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:37.268 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:37.268 Starting DPDK initialization... 00:04:37.268 Starting SPDK post initialization... 00:04:37.268 SPDK NVMe probe 00:04:37.268 Attaching to 0000:00:10.0 00:04:37.268 Attaching to 0000:00:11.0 00:04:37.268 Attached to 0000:00:10.0 00:04:37.268 Attached to 0000:00:11.0 00:04:37.268 Cleaning up... 00:04:37.268 00:04:37.268 real 0m0.189s 00:04:37.268 user 0m0.044s 00:04:37.268 sys 0m0.044s 00:04:37.268 08:15:38 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.268 08:15:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.268 ************************************ 00:04:37.268 END TEST env_dpdk_post_init 00:04:37.268 ************************************ 00:04:37.268 08:15:38 env -- env/env.sh@26 -- # uname 00:04:37.268 08:15:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:37.268 08:15:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.268 08:15:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.268 08:15:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.268 08:15:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.268 ************************************ 00:04:37.268 START TEST env_mem_callbacks 00:04:37.268 ************************************ 00:04:37.268 08:15:38 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.268 EAL: Detected CPU lcores: 10 00:04:37.268 EAL: Detected NUMA nodes: 1 00:04:37.268 EAL: Detected shared linkage of DPDK 00:04:37.268 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.528 EAL: Selected IOVA mode 'PA' 00:04:37.528 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.528 00:04:37.528 00:04:37.528 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.528 http://cunit.sourceforge.net/ 00:04:37.528 00:04:37.528 00:04:37.528 Suite: memory 00:04:37.528 Test: test ... 
00:04:37.528 register 0x200000200000 2097152 00:04:37.528 malloc 3145728 00:04:37.528 register 0x200000400000 4194304 00:04:37.528 buf 0x200000500000 len 3145728 PASSED 00:04:37.528 malloc 64 00:04:37.528 buf 0x2000004fff40 len 64 PASSED 00:04:37.528 malloc 4194304 00:04:37.528 register 0x200000800000 6291456 00:04:37.528 buf 0x200000a00000 len 4194304 PASSED 00:04:37.528 free 0x200000500000 3145728 00:04:37.528 free 0x2000004fff40 64 00:04:37.528 unregister 0x200000400000 4194304 PASSED 00:04:37.528 free 0x200000a00000 4194304 00:04:37.528 unregister 0x200000800000 6291456 PASSED 00:04:37.528 malloc 8388608 00:04:37.528 register 0x200000400000 10485760 00:04:37.528 buf 0x200000600000 len 8388608 PASSED 00:04:37.528 free 0x200000600000 8388608 00:04:37.528 unregister 0x200000400000 10485760 PASSED 00:04:37.528 passed 00:04:37.528 00:04:37.528 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.528 suites 1 1 n/a 0 0 00:04:37.528 tests 1 1 1 0 0 00:04:37.528 asserts 15 15 15 0 n/a 00:04:37.528 00:04:37.528 Elapsed time = 0.009 seconds 00:04:37.528 00:04:37.528 real 0m0.148s 00:04:37.528 user 0m0.020s 00:04:37.528 sys 0m0.025s 00:04:37.528 08:15:39 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.528 08:15:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.528 ************************************ 00:04:37.528 END TEST env_mem_callbacks 00:04:37.528 ************************************ 00:04:37.528 00:04:37.528 real 0m2.981s 00:04:37.528 user 0m1.592s 00:04:37.528 sys 0m1.040s 00:04:37.528 08:15:39 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.528 ************************************ 00:04:37.528 08:15:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.528 END TEST env 00:04:37.528 ************************************ 00:04:37.528 08:15:39 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.528 08:15:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.528 08:15:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.528 08:15:39 -- common/autotest_common.sh@10 -- # set +x 00:04:37.528 ************************************ 00:04:37.528 START TEST rpc 00:04:37.528 ************************************ 00:04:37.528 08:15:39 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.787 * Looking for test storage... 
00:04:37.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.787 08:15:39 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.787 08:15:39 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.787 08:15:39 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.787 08:15:39 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.787 08:15:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.787 08:15:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.787 08:15:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.787 08:15:39 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.787 08:15:39 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.787 08:15:39 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.787 08:15:39 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.787 08:15:39 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.787 08:15:39 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.787 08:15:39 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.787 08:15:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.787 08:15:39 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.787 08:15:39 rpc -- scripts/common.sh@345 -- # : 1 00:04:37.787 08:15:39 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.787 08:15:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.787 08:15:39 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.787 08:15:39 rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.787 08:15:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.787 08:15:39 rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.787 08:15:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.787 08:15:39 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.787 08:15:39 rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.787 08:15:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.787 08:15:39 rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.787 08:15:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.787 08:15:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.787 08:15:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.787 08:15:39 rpc -- scripts/common.sh@368 -- # return 0 00:04:37.787 08:15:39 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.787 08:15:39 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.787 --rc genhtml_branch_coverage=1 00:04:37.787 --rc genhtml_function_coverage=1 00:04:37.788 --rc genhtml_legend=1 00:04:37.788 --rc geninfo_all_blocks=1 00:04:37.788 --rc geninfo_unexecuted_blocks=1 00:04:37.788 00:04:37.788 ' 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.788 --rc genhtml_branch_coverage=1 00:04:37.788 --rc genhtml_function_coverage=1 00:04:37.788 --rc genhtml_legend=1 00:04:37.788 --rc geninfo_all_blocks=1 00:04:37.788 --rc geninfo_unexecuted_blocks=1 00:04:37.788 00:04:37.788 ' 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.788 --rc genhtml_branch_coverage=1 00:04:37.788 --rc genhtml_function_coverage=1 00:04:37.788 --rc 
genhtml_legend=1 00:04:37.788 --rc geninfo_all_blocks=1 00:04:37.788 --rc geninfo_unexecuted_blocks=1 00:04:37.788 00:04:37.788 ' 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.788 --rc genhtml_branch_coverage=1 00:04:37.788 --rc genhtml_function_coverage=1 00:04:37.788 --rc genhtml_legend=1 00:04:37.788 --rc geninfo_all_blocks=1 00:04:37.788 --rc geninfo_unexecuted_blocks=1 00:04:37.788 00:04:37.788 ' 00:04:37.788 08:15:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56877 00:04:37.788 08:15:39 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:37.788 08:15:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.788 08:15:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56877 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@831 -- # '[' -z 56877 ']' 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.788 08:15:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.788 [2024-10-15 08:15:39.466626] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:04:37.788 [2024-10-15 08:15:39.466725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56877 ] 00:04:38.046 [2024-10-15 08:15:39.604449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.046 [2024-10-15 08:15:39.696286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.046 [2024-10-15 08:15:39.696363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56877' to capture a snapshot of events at runtime. 00:04:38.046 [2024-10-15 08:15:39.696378] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.046 [2024-10-15 08:15:39.696389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.046 [2024-10-15 08:15:39.696398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56877 for offline analysis/debug. 
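The app_setup_trace notices above spell out the two ways to inspect the tracepoints enabled by '-e bdev' on pid 56877. A minimal sketch of both follows; the spdk_trace invocation and the /dev/shm path are taken verbatim from those notices, while the build/bin location of spdk_trace and the /tmp destination are assumptions for illustration only:

  # snapshot live events from the running target (tpoint group mask 0x8 = bdev)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 56877
  # or keep the shared-memory trace file for offline analysis after the target exits
  cp /dev/shm/spdk_tgt_trace.pid56877 /tmp/spdk_tgt_trace.pid56877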
00:04:38.046 [2024-10-15 08:15:39.696998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.306 [2024-10-15 08:15:39.802292] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:38.874 08:15:40 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.874 08:15:40 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:38.874 08:15:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.874 08:15:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.874 08:15:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:38.874 08:15:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:38.874 08:15:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.874 08:15:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.874 08:15:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.874 ************************************ 00:04:38.874 START TEST rpc_integrity 00:04:38.874 ************************************ 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.874 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.874 { 00:04:38.874 "name": "Malloc0", 00:04:38.874 "aliases": [ 00:04:38.874 "3d05f79b-6064-4638-9364-0188b49c0b2d" 00:04:38.874 ], 00:04:38.874 "product_name": "Malloc disk", 00:04:38.874 "block_size": 512, 00:04:38.874 "num_blocks": 16384, 00:04:38.874 "uuid": "3d05f79b-6064-4638-9364-0188b49c0b2d", 00:04:38.874 "assigned_rate_limits": { 00:04:38.874 "rw_ios_per_sec": 0, 00:04:38.874 "rw_mbytes_per_sec": 0, 00:04:38.874 "r_mbytes_per_sec": 0, 00:04:38.874 "w_mbytes_per_sec": 0 00:04:38.874 }, 00:04:38.874 "claimed": false, 00:04:38.874 "zoned": false, 00:04:38.874 
"supported_io_types": { 00:04:38.874 "read": true, 00:04:38.874 "write": true, 00:04:38.874 "unmap": true, 00:04:38.874 "flush": true, 00:04:38.874 "reset": true, 00:04:38.874 "nvme_admin": false, 00:04:38.874 "nvme_io": false, 00:04:38.874 "nvme_io_md": false, 00:04:38.874 "write_zeroes": true, 00:04:38.874 "zcopy": true, 00:04:38.874 "get_zone_info": false, 00:04:38.874 "zone_management": false, 00:04:38.874 "zone_append": false, 00:04:38.874 "compare": false, 00:04:38.874 "compare_and_write": false, 00:04:38.874 "abort": true, 00:04:38.874 "seek_hole": false, 00:04:38.874 "seek_data": false, 00:04:38.874 "copy": true, 00:04:38.874 "nvme_iov_md": false 00:04:38.874 }, 00:04:38.874 "memory_domains": [ 00:04:38.874 { 00:04:38.874 "dma_device_id": "system", 00:04:38.874 "dma_device_type": 1 00:04:38.874 }, 00:04:38.874 { 00:04:38.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.874 "dma_device_type": 2 00:04:38.874 } 00:04:38.874 ], 00:04:38.874 "driver_specific": {} 00:04:38.874 } 00:04:38.874 ]' 00:04:38.874 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.134 [2024-10-15 08:15:40.657051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.134 [2024-10-15 08:15:40.657154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.134 [2024-10-15 08:15:40.657195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8d120 00:04:39.134 [2024-10-15 08:15:40.657207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.134 [2024-10-15 08:15:40.659039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.134 [2024-10-15 08:15:40.659075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.134 Passthru0 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.134 { 00:04:39.134 "name": "Malloc0", 00:04:39.134 "aliases": [ 00:04:39.134 "3d05f79b-6064-4638-9364-0188b49c0b2d" 00:04:39.134 ], 00:04:39.134 "product_name": "Malloc disk", 00:04:39.134 "block_size": 512, 00:04:39.134 "num_blocks": 16384, 00:04:39.134 "uuid": "3d05f79b-6064-4638-9364-0188b49c0b2d", 00:04:39.134 "assigned_rate_limits": { 00:04:39.134 "rw_ios_per_sec": 0, 00:04:39.134 "rw_mbytes_per_sec": 0, 00:04:39.134 "r_mbytes_per_sec": 0, 00:04:39.134 "w_mbytes_per_sec": 0 00:04:39.134 }, 00:04:39.134 "claimed": true, 00:04:39.134 "claim_type": "exclusive_write", 00:04:39.134 "zoned": false, 00:04:39.134 "supported_io_types": { 00:04:39.134 "read": true, 00:04:39.134 "write": true, 00:04:39.134 "unmap": true, 00:04:39.134 "flush": true, 00:04:39.134 "reset": true, 00:04:39.134 "nvme_admin": false, 
00:04:39.134 "nvme_io": false, 00:04:39.134 "nvme_io_md": false, 00:04:39.134 "write_zeroes": true, 00:04:39.134 "zcopy": true, 00:04:39.134 "get_zone_info": false, 00:04:39.134 "zone_management": false, 00:04:39.134 "zone_append": false, 00:04:39.134 "compare": false, 00:04:39.134 "compare_and_write": false, 00:04:39.134 "abort": true, 00:04:39.134 "seek_hole": false, 00:04:39.134 "seek_data": false, 00:04:39.134 "copy": true, 00:04:39.134 "nvme_iov_md": false 00:04:39.134 }, 00:04:39.134 "memory_domains": [ 00:04:39.134 { 00:04:39.134 "dma_device_id": "system", 00:04:39.134 "dma_device_type": 1 00:04:39.134 }, 00:04:39.134 { 00:04:39.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.134 "dma_device_type": 2 00:04:39.134 } 00:04:39.134 ], 00:04:39.134 "driver_specific": {} 00:04:39.134 }, 00:04:39.134 { 00:04:39.134 "name": "Passthru0", 00:04:39.134 "aliases": [ 00:04:39.134 "d9769d1a-fa83-5509-afec-4aa5c836f00c" 00:04:39.134 ], 00:04:39.134 "product_name": "passthru", 00:04:39.134 "block_size": 512, 00:04:39.134 "num_blocks": 16384, 00:04:39.134 "uuid": "d9769d1a-fa83-5509-afec-4aa5c836f00c", 00:04:39.134 "assigned_rate_limits": { 00:04:39.134 "rw_ios_per_sec": 0, 00:04:39.134 "rw_mbytes_per_sec": 0, 00:04:39.134 "r_mbytes_per_sec": 0, 00:04:39.134 "w_mbytes_per_sec": 0 00:04:39.134 }, 00:04:39.134 "claimed": false, 00:04:39.134 "zoned": false, 00:04:39.134 "supported_io_types": { 00:04:39.134 "read": true, 00:04:39.134 "write": true, 00:04:39.134 "unmap": true, 00:04:39.134 "flush": true, 00:04:39.134 "reset": true, 00:04:39.134 "nvme_admin": false, 00:04:39.134 "nvme_io": false, 00:04:39.134 "nvme_io_md": false, 00:04:39.134 "write_zeroes": true, 00:04:39.134 "zcopy": true, 00:04:39.134 "get_zone_info": false, 00:04:39.134 "zone_management": false, 00:04:39.134 "zone_append": false, 00:04:39.134 "compare": false, 00:04:39.134 "compare_and_write": false, 00:04:39.134 "abort": true, 00:04:39.134 "seek_hole": false, 00:04:39.134 "seek_data": false, 00:04:39.134 "copy": true, 00:04:39.134 "nvme_iov_md": false 00:04:39.134 }, 00:04:39.134 "memory_domains": [ 00:04:39.134 { 00:04:39.134 "dma_device_id": "system", 00:04:39.134 "dma_device_type": 1 00:04:39.134 }, 00:04:39.134 { 00:04:39.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.134 "dma_device_type": 2 00:04:39.134 } 00:04:39.134 ], 00:04:39.134 "driver_specific": { 00:04:39.134 "passthru": { 00:04:39.134 "name": "Passthru0", 00:04:39.134 "base_bdev_name": "Malloc0" 00:04:39.134 } 00:04:39.134 } 00:04:39.134 } 00:04:39.134 ]' 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.134 08:15:40 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.134 08:15:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.134 00:04:39.134 real 0m0.337s 00:04:39.134 user 0m0.221s 00:04:39.134 sys 0m0.047s 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.134 08:15:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.134 ************************************ 00:04:39.134 END TEST rpc_integrity 00:04:39.134 ************************************ 00:04:39.394 08:15:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.394 08:15:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.394 08:15:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.394 08:15:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 ************************************ 00:04:39.394 START TEST rpc_plugins 00:04:39.394 ************************************ 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:39.394 { 00:04:39.394 "name": "Malloc1", 00:04:39.394 "aliases": [ 00:04:39.394 "3a94f20a-c38e-4d0c-865c-f68baff55f3a" 00:04:39.394 ], 00:04:39.394 "product_name": "Malloc disk", 00:04:39.394 "block_size": 4096, 00:04:39.394 "num_blocks": 256, 00:04:39.394 "uuid": "3a94f20a-c38e-4d0c-865c-f68baff55f3a", 00:04:39.394 "assigned_rate_limits": { 00:04:39.394 "rw_ios_per_sec": 0, 00:04:39.394 "rw_mbytes_per_sec": 0, 00:04:39.394 "r_mbytes_per_sec": 0, 00:04:39.394 "w_mbytes_per_sec": 0 00:04:39.394 }, 00:04:39.394 "claimed": false, 00:04:39.394 "zoned": false, 00:04:39.394 "supported_io_types": { 00:04:39.394 "read": true, 00:04:39.394 "write": true, 00:04:39.394 "unmap": true, 00:04:39.394 "flush": true, 00:04:39.394 "reset": true, 00:04:39.394 "nvme_admin": false, 00:04:39.394 "nvme_io": false, 00:04:39.394 "nvme_io_md": false, 00:04:39.394 "write_zeroes": true, 00:04:39.394 "zcopy": true, 00:04:39.394 "get_zone_info": false, 00:04:39.394 "zone_management": false, 00:04:39.394 "zone_append": false, 00:04:39.394 "compare": false, 00:04:39.394 "compare_and_write": false, 00:04:39.394 "abort": true, 00:04:39.394 "seek_hole": false, 00:04:39.394 "seek_data": false, 00:04:39.394 "copy": true, 00:04:39.394 "nvme_iov_md": false 00:04:39.394 }, 00:04:39.394 "memory_domains": [ 00:04:39.394 { 
00:04:39.394 "dma_device_id": "system", 00:04:39.394 "dma_device_type": 1 00:04:39.394 }, 00:04:39.394 { 00:04:39.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.394 "dma_device_type": 2 00:04:39.394 } 00:04:39.394 ], 00:04:39.394 "driver_specific": {} 00:04:39.394 } 00:04:39.394 ]' 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 08:15:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:39.394 08:15:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:39.394 08:15:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:39.394 00:04:39.394 real 0m0.148s 00:04:39.394 user 0m0.092s 00:04:39.394 sys 0m0.017s 00:04:39.394 08:15:41 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.394 ************************************ 00:04:39.394 END TEST rpc_plugins 00:04:39.394 ************************************ 00:04:39.394 08:15:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 08:15:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:39.394 08:15:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.394 08:15:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.394 08:15:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 ************************************ 00:04:39.394 START TEST rpc_trace_cmd_test 00:04:39.394 ************************************ 00:04:39.394 08:15:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:39.394 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:39.394 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:39.394 08:15:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.394 08:15:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.394 08:15:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.394 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:39.394 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56877", 00:04:39.394 "tpoint_group_mask": "0x8", 00:04:39.394 "iscsi_conn": { 00:04:39.394 "mask": "0x2", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "scsi": { 00:04:39.394 "mask": "0x4", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "bdev": { 00:04:39.394 "mask": "0x8", 00:04:39.394 "tpoint_mask": "0xffffffffffffffff" 00:04:39.394 }, 00:04:39.394 "nvmf_rdma": { 00:04:39.394 "mask": "0x10", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "nvmf_tcp": { 00:04:39.394 "mask": "0x20", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "ftl": { 00:04:39.394 
"mask": "0x40", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "blobfs": { 00:04:39.394 "mask": "0x80", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "dsa": { 00:04:39.394 "mask": "0x200", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "thread": { 00:04:39.394 "mask": "0x400", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "nvme_pcie": { 00:04:39.394 "mask": "0x800", 00:04:39.394 "tpoint_mask": "0x0" 00:04:39.394 }, 00:04:39.394 "iaa": { 00:04:39.395 "mask": "0x1000", 00:04:39.395 "tpoint_mask": "0x0" 00:04:39.395 }, 00:04:39.395 "nvme_tcp": { 00:04:39.395 "mask": "0x2000", 00:04:39.395 "tpoint_mask": "0x0" 00:04:39.395 }, 00:04:39.395 "bdev_nvme": { 00:04:39.395 "mask": "0x4000", 00:04:39.395 "tpoint_mask": "0x0" 00:04:39.395 }, 00:04:39.395 "sock": { 00:04:39.395 "mask": "0x8000", 00:04:39.395 "tpoint_mask": "0x0" 00:04:39.395 }, 00:04:39.395 "blob": { 00:04:39.395 "mask": "0x10000", 00:04:39.395 "tpoint_mask": "0x0" 00:04:39.395 }, 00:04:39.395 "bdev_raid": { 00:04:39.395 "mask": "0x20000", 00:04:39.395 "tpoint_mask": "0x0" 00:04:39.395 }, 00:04:39.395 "scheduler": { 00:04:39.395 "mask": "0x40000", 00:04:39.395 "tpoint_mask": "0x0" 00:04:39.395 } 00:04:39.395 }' 00:04:39.395 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.653 00:04:39.653 real 0m0.252s 00:04:39.653 user 0m0.218s 00:04:39.653 sys 0m0.022s 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.653 ************************************ 00:04:39.653 END TEST rpc_trace_cmd_test 00:04:39.653 ************************************ 00:04:39.653 08:15:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.653 08:15:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.653 08:15:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.653 08:15:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.653 08:15:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.653 08:15:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.653 08:15:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.653 ************************************ 00:04:39.653 START TEST rpc_daemon_integrity 00:04:39.653 ************************************ 00:04:39.653 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:39.654 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.654 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.654 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.913 
08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.913 { 00:04:39.913 "name": "Malloc2", 00:04:39.913 "aliases": [ 00:04:39.913 "9bfda699-bf0d-4049-aad0-4e696ab6e139" 00:04:39.913 ], 00:04:39.913 "product_name": "Malloc disk", 00:04:39.913 "block_size": 512, 00:04:39.913 "num_blocks": 16384, 00:04:39.913 "uuid": "9bfda699-bf0d-4049-aad0-4e696ab6e139", 00:04:39.913 "assigned_rate_limits": { 00:04:39.913 "rw_ios_per_sec": 0, 00:04:39.913 "rw_mbytes_per_sec": 0, 00:04:39.913 "r_mbytes_per_sec": 0, 00:04:39.913 "w_mbytes_per_sec": 0 00:04:39.913 }, 00:04:39.913 "claimed": false, 00:04:39.913 "zoned": false, 00:04:39.913 "supported_io_types": { 00:04:39.913 "read": true, 00:04:39.913 "write": true, 00:04:39.913 "unmap": true, 00:04:39.913 "flush": true, 00:04:39.913 "reset": true, 00:04:39.913 "nvme_admin": false, 00:04:39.913 "nvme_io": false, 00:04:39.913 "nvme_io_md": false, 00:04:39.913 "write_zeroes": true, 00:04:39.913 "zcopy": true, 00:04:39.913 "get_zone_info": false, 00:04:39.913 "zone_management": false, 00:04:39.913 "zone_append": false, 00:04:39.913 "compare": false, 00:04:39.913 "compare_and_write": false, 00:04:39.913 "abort": true, 00:04:39.913 "seek_hole": false, 00:04:39.913 "seek_data": false, 00:04:39.913 "copy": true, 00:04:39.913 "nvme_iov_md": false 00:04:39.913 }, 00:04:39.913 "memory_domains": [ 00:04:39.913 { 00:04:39.913 "dma_device_id": "system", 00:04:39.913 "dma_device_type": 1 00:04:39.913 }, 00:04:39.913 { 00:04:39.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.913 "dma_device_type": 2 00:04:39.913 } 00:04:39.913 ], 00:04:39.913 "driver_specific": {} 00:04:39.913 } 00:04:39.913 ]' 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.913 [2024-10-15 08:15:41.543539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.913 [2024-10-15 08:15:41.543608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:39.913 [2024-10-15 08:15:41.543632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc9aa90 00:04:39.913 [2024-10-15 08:15:41.543643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.913 [2024-10-15 08:15:41.546007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.913 [2024-10-15 08:15:41.546046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.913 Passthru0 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.913 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.913 { 00:04:39.913 "name": "Malloc2", 00:04:39.913 "aliases": [ 00:04:39.913 "9bfda699-bf0d-4049-aad0-4e696ab6e139" 00:04:39.913 ], 00:04:39.913 "product_name": "Malloc disk", 00:04:39.913 "block_size": 512, 00:04:39.913 "num_blocks": 16384, 00:04:39.913 "uuid": "9bfda699-bf0d-4049-aad0-4e696ab6e139", 00:04:39.913 "assigned_rate_limits": { 00:04:39.913 "rw_ios_per_sec": 0, 00:04:39.913 "rw_mbytes_per_sec": 0, 00:04:39.913 "r_mbytes_per_sec": 0, 00:04:39.913 "w_mbytes_per_sec": 0 00:04:39.913 }, 00:04:39.913 "claimed": true, 00:04:39.913 "claim_type": "exclusive_write", 00:04:39.913 "zoned": false, 00:04:39.913 "supported_io_types": { 00:04:39.913 "read": true, 00:04:39.913 "write": true, 00:04:39.913 "unmap": true, 00:04:39.913 "flush": true, 00:04:39.913 "reset": true, 00:04:39.913 "nvme_admin": false, 00:04:39.913 "nvme_io": false, 00:04:39.913 "nvme_io_md": false, 00:04:39.913 "write_zeroes": true, 00:04:39.913 "zcopy": true, 00:04:39.913 "get_zone_info": false, 00:04:39.913 "zone_management": false, 00:04:39.913 "zone_append": false, 00:04:39.913 "compare": false, 00:04:39.913 "compare_and_write": false, 00:04:39.913 "abort": true, 00:04:39.913 "seek_hole": false, 00:04:39.913 "seek_data": false, 00:04:39.913 "copy": true, 00:04:39.913 "nvme_iov_md": false 00:04:39.913 }, 00:04:39.913 "memory_domains": [ 00:04:39.913 { 00:04:39.913 "dma_device_id": "system", 00:04:39.913 "dma_device_type": 1 00:04:39.913 }, 00:04:39.913 { 00:04:39.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.913 "dma_device_type": 2 00:04:39.913 } 00:04:39.913 ], 00:04:39.913 "driver_specific": {} 00:04:39.913 }, 00:04:39.913 { 00:04:39.913 "name": "Passthru0", 00:04:39.913 "aliases": [ 00:04:39.913 "1e43b215-6a76-59a7-888a-e26edb63c87f" 00:04:39.913 ], 00:04:39.913 "product_name": "passthru", 00:04:39.913 "block_size": 512, 00:04:39.913 "num_blocks": 16384, 00:04:39.913 "uuid": "1e43b215-6a76-59a7-888a-e26edb63c87f", 00:04:39.914 "assigned_rate_limits": { 00:04:39.914 "rw_ios_per_sec": 0, 00:04:39.914 "rw_mbytes_per_sec": 0, 00:04:39.914 "r_mbytes_per_sec": 0, 00:04:39.914 "w_mbytes_per_sec": 0 00:04:39.914 }, 00:04:39.914 "claimed": false, 00:04:39.914 "zoned": false, 00:04:39.914 "supported_io_types": { 00:04:39.914 "read": true, 00:04:39.914 "write": true, 00:04:39.914 "unmap": true, 00:04:39.914 "flush": true, 00:04:39.914 "reset": true, 00:04:39.914 "nvme_admin": false, 00:04:39.914 "nvme_io": false, 00:04:39.914 "nvme_io_md": 
false, 00:04:39.914 "write_zeroes": true, 00:04:39.914 "zcopy": true, 00:04:39.914 "get_zone_info": false, 00:04:39.914 "zone_management": false, 00:04:39.914 "zone_append": false, 00:04:39.914 "compare": false, 00:04:39.914 "compare_and_write": false, 00:04:39.914 "abort": true, 00:04:39.914 "seek_hole": false, 00:04:39.914 "seek_data": false, 00:04:39.914 "copy": true, 00:04:39.914 "nvme_iov_md": false 00:04:39.914 }, 00:04:39.914 "memory_domains": [ 00:04:39.914 { 00:04:39.914 "dma_device_id": "system", 00:04:39.914 "dma_device_type": 1 00:04:39.914 }, 00:04:39.914 { 00:04:39.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.914 "dma_device_type": 2 00:04:39.914 } 00:04:39.914 ], 00:04:39.914 "driver_specific": { 00:04:39.914 "passthru": { 00:04:39.914 "name": "Passthru0", 00:04:39.914 "base_bdev_name": "Malloc2" 00:04:39.914 } 00:04:39.914 } 00:04:39.914 } 00:04:39.914 ]' 00:04:39.914 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.914 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.914 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.914 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.914 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.184 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.184 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.184 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.185 00:04:40.185 real 0m0.338s 00:04:40.185 user 0m0.220s 00:04:40.185 sys 0m0.046s 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.185 08:15:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.185 ************************************ 00:04:40.185 END TEST rpc_daemon_integrity 00:04:40.185 ************************************ 00:04:40.185 08:15:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.185 08:15:41 rpc -- rpc/rpc.sh@84 -- # killprocess 56877 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@950 -- # '[' -z 56877 ']' 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@954 -- # kill -0 56877 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@955 -- # uname 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56877 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:40.185 
08:15:41 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:40.185 killing process with pid 56877 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56877' 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@969 -- # kill 56877 00:04:40.185 08:15:41 rpc -- common/autotest_common.sh@974 -- # wait 56877 00:04:40.806 00:04:40.806 real 0m3.119s 00:04:40.806 user 0m3.845s 00:04:40.806 sys 0m0.829s 00:04:40.806 08:15:42 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.806 08:15:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.806 ************************************ 00:04:40.806 END TEST rpc 00:04:40.806 ************************************ 00:04:40.806 08:15:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.806 08:15:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.806 08:15:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.806 08:15:42 -- common/autotest_common.sh@10 -- # set +x 00:04:40.806 ************************************ 00:04:40.806 START TEST skip_rpc 00:04:40.806 ************************************ 00:04:40.806 08:15:42 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.806 * Looking for test storage... 00:04:40.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.806 08:15:42 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:40.806 08:15:42 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:40.806 08:15:42 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.066 08:15:42 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:41.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.066 --rc genhtml_branch_coverage=1 00:04:41.066 --rc genhtml_function_coverage=1 00:04:41.066 --rc genhtml_legend=1 00:04:41.066 --rc geninfo_all_blocks=1 00:04:41.066 --rc geninfo_unexecuted_blocks=1 00:04:41.066 00:04:41.066 ' 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:41.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.066 --rc genhtml_branch_coverage=1 00:04:41.066 --rc genhtml_function_coverage=1 00:04:41.066 --rc genhtml_legend=1 00:04:41.066 --rc geninfo_all_blocks=1 00:04:41.066 --rc geninfo_unexecuted_blocks=1 00:04:41.066 00:04:41.066 ' 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:41.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.066 --rc genhtml_branch_coverage=1 00:04:41.066 --rc genhtml_function_coverage=1 00:04:41.066 --rc genhtml_legend=1 00:04:41.066 --rc geninfo_all_blocks=1 00:04:41.066 --rc geninfo_unexecuted_blocks=1 00:04:41.066 00:04:41.066 ' 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:41.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.066 --rc genhtml_branch_coverage=1 00:04:41.066 --rc genhtml_function_coverage=1 00:04:41.066 --rc genhtml_legend=1 00:04:41.066 --rc geninfo_all_blocks=1 00:04:41.066 --rc geninfo_unexecuted_blocks=1 00:04:41.066 00:04:41.066 ' 00:04:41.066 08:15:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.066 08:15:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.066 08:15:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.066 08:15:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.066 ************************************ 00:04:41.066 START TEST skip_rpc 00:04:41.066 ************************************ 00:04:41.066 08:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:41.066 08:15:42 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57089 00:04:41.066 08:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.066 08:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:41.066 08:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:41.066 [2024-10-15 08:15:42.647811] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:04:41.066 [2024-10-15 08:15:42.647991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57089 ] 00:04:41.066 [2024-10-15 08:15:42.789649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.325 [2024-10-15 08:15:42.869044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.325 [2024-10-15 08:15:42.969894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57089 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57089 ']' 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57089 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57089 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:46.597 killing process with pid 57089 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 57089' 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57089 00:04:46.597 08:15:47 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57089 00:04:46.597 00:04:46.597 real 0m5.597s 00:04:46.597 user 0m5.136s 00:04:46.597 sys 0m0.371s 00:04:46.597 08:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.597 08:15:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.597 ************************************ 00:04:46.597 END TEST skip_rpc 00:04:46.598 ************************************ 00:04:46.598 08:15:48 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.598 08:15:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.598 08:15:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.598 08:15:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.598 ************************************ 00:04:46.598 START TEST skip_rpc_with_json 00:04:46.598 ************************************ 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57175 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57175 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57175 ']' 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.598 08:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.598 [2024-10-15 08:15:48.298923] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
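waitforlisten above blocks until the freshly started target (pid 57175) accepts RPCs on /var/tmp/spdk.sock. A minimal sketch of an equivalent manual liveness check, assuming the bundled scripts/rpc.py client from the same checkout rather than anything the recorded test flow actually runs:

  # prints the SPDK version JSON once the target is listening; errors out while it is still starting
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version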
00:04:46.598 [2024-10-15 08:15:48.299068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57175 ] 00:04:46.856 [2024-10-15 08:15:48.438166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.856 [2024-10-15 08:15:48.519254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.115 [2024-10-15 08:15:48.620663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:47.683 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.683 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.684 [2024-10-15 08:15:49.343906] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.684 request: 00:04:47.684 { 00:04:47.684 "trtype": "tcp", 00:04:47.684 "method": "nvmf_get_transports", 00:04:47.684 "req_id": 1 00:04:47.684 } 00:04:47.684 Got JSON-RPC error response 00:04:47.684 response: 00:04:47.684 { 00:04:47.684 "code": -19, 00:04:47.684 "message": "No such device" 00:04:47.684 } 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.684 [2024-10-15 08:15:49.355988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.684 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.943 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.943 08:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.943 { 00:04:47.943 "subsystems": [ 00:04:47.943 { 00:04:47.943 "subsystem": "fsdev", 00:04:47.943 "config": [ 00:04:47.943 { 00:04:47.943 "method": "fsdev_set_opts", 00:04:47.943 "params": { 00:04:47.943 "fsdev_io_pool_size": 65535, 00:04:47.943 "fsdev_io_cache_size": 256 00:04:47.943 } 00:04:47.943 } 00:04:47.943 ] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "keyring", 00:04:47.943 "config": [] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "iobuf", 00:04:47.943 "config": [ 00:04:47.943 { 00:04:47.943 "method": "iobuf_set_options", 00:04:47.943 "params": { 00:04:47.943 "small_pool_count": 8192, 00:04:47.943 "large_pool_count": 1024, 00:04:47.943 "small_bufsize": 8192, 00:04:47.943 "large_bufsize": 135168 00:04:47.943 } 00:04:47.943 } 00:04:47.943 ] 00:04:47.943 
}, 00:04:47.943 { 00:04:47.943 "subsystem": "sock", 00:04:47.943 "config": [ 00:04:47.943 { 00:04:47.943 "method": "sock_set_default_impl", 00:04:47.943 "params": { 00:04:47.943 "impl_name": "uring" 00:04:47.943 } 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "method": "sock_impl_set_options", 00:04:47.943 "params": { 00:04:47.943 "impl_name": "ssl", 00:04:47.943 "recv_buf_size": 4096, 00:04:47.943 "send_buf_size": 4096, 00:04:47.943 "enable_recv_pipe": true, 00:04:47.943 "enable_quickack": false, 00:04:47.943 "enable_placement_id": 0, 00:04:47.943 "enable_zerocopy_send_server": true, 00:04:47.943 "enable_zerocopy_send_client": false, 00:04:47.943 "zerocopy_threshold": 0, 00:04:47.943 "tls_version": 0, 00:04:47.943 "enable_ktls": false 00:04:47.943 } 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "method": "sock_impl_set_options", 00:04:47.943 "params": { 00:04:47.943 "impl_name": "posix", 00:04:47.943 "recv_buf_size": 2097152, 00:04:47.943 "send_buf_size": 2097152, 00:04:47.943 "enable_recv_pipe": true, 00:04:47.943 "enable_quickack": false, 00:04:47.943 "enable_placement_id": 0, 00:04:47.943 "enable_zerocopy_send_server": true, 00:04:47.943 "enable_zerocopy_send_client": false, 00:04:47.943 "zerocopy_threshold": 0, 00:04:47.943 "tls_version": 0, 00:04:47.943 "enable_ktls": false 00:04:47.943 } 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "method": "sock_impl_set_options", 00:04:47.943 "params": { 00:04:47.943 "impl_name": "uring", 00:04:47.943 "recv_buf_size": 2097152, 00:04:47.943 "send_buf_size": 2097152, 00:04:47.943 "enable_recv_pipe": true, 00:04:47.943 "enable_quickack": false, 00:04:47.943 "enable_placement_id": 0, 00:04:47.943 "enable_zerocopy_send_server": false, 00:04:47.943 "enable_zerocopy_send_client": false, 00:04:47.943 "zerocopy_threshold": 0, 00:04:47.943 "tls_version": 0, 00:04:47.943 "enable_ktls": false 00:04:47.943 } 00:04:47.943 } 00:04:47.943 ] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "vmd", 00:04:47.943 "config": [] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "accel", 00:04:47.943 "config": [ 00:04:47.943 { 00:04:47.943 "method": "accel_set_options", 00:04:47.943 "params": { 00:04:47.943 "small_cache_size": 128, 00:04:47.943 "large_cache_size": 16, 00:04:47.943 "task_count": 2048, 00:04:47.943 "sequence_count": 2048, 00:04:47.943 "buf_count": 2048 00:04:47.943 } 00:04:47.943 } 00:04:47.943 ] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "bdev", 00:04:47.943 "config": [ 00:04:47.943 { 00:04:47.943 "method": "bdev_set_options", 00:04:47.943 "params": { 00:04:47.943 "bdev_io_pool_size": 65535, 00:04:47.943 "bdev_io_cache_size": 256, 00:04:47.943 "bdev_auto_examine": true, 00:04:47.943 "iobuf_small_cache_size": 128, 00:04:47.943 "iobuf_large_cache_size": 16 00:04:47.943 } 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "method": "bdev_raid_set_options", 00:04:47.943 "params": { 00:04:47.943 "process_window_size_kb": 1024, 00:04:47.943 "process_max_bandwidth_mb_sec": 0 00:04:47.943 } 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "method": "bdev_iscsi_set_options", 00:04:47.943 "params": { 00:04:47.943 "timeout_sec": 30 00:04:47.943 } 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "method": "bdev_nvme_set_options", 00:04:47.943 "params": { 00:04:47.943 "action_on_timeout": "none", 00:04:47.943 "timeout_us": 0, 00:04:47.943 "timeout_admin_us": 0, 00:04:47.943 "keep_alive_timeout_ms": 10000, 00:04:47.943 "arbitration_burst": 0, 00:04:47.943 "low_priority_weight": 0, 00:04:47.943 "medium_priority_weight": 0, 00:04:47.943 "high_priority_weight": 0, 
00:04:47.943 "nvme_adminq_poll_period_us": 10000, 00:04:47.943 "nvme_ioq_poll_period_us": 0, 00:04:47.943 "io_queue_requests": 0, 00:04:47.943 "delay_cmd_submit": true, 00:04:47.943 "transport_retry_count": 4, 00:04:47.943 "bdev_retry_count": 3, 00:04:47.943 "transport_ack_timeout": 0, 00:04:47.943 "ctrlr_loss_timeout_sec": 0, 00:04:47.943 "reconnect_delay_sec": 0, 00:04:47.943 "fast_io_fail_timeout_sec": 0, 00:04:47.943 "disable_auto_failback": false, 00:04:47.943 "generate_uuids": false, 00:04:47.943 "transport_tos": 0, 00:04:47.943 "nvme_error_stat": false, 00:04:47.943 "rdma_srq_size": 0, 00:04:47.943 "io_path_stat": false, 00:04:47.943 "allow_accel_sequence": false, 00:04:47.943 "rdma_max_cq_size": 0, 00:04:47.943 "rdma_cm_event_timeout_ms": 0, 00:04:47.943 "dhchap_digests": [ 00:04:47.943 "sha256", 00:04:47.943 "sha384", 00:04:47.943 "sha512" 00:04:47.943 ], 00:04:47.943 "dhchap_dhgroups": [ 00:04:47.943 "null", 00:04:47.943 "ffdhe2048", 00:04:47.943 "ffdhe3072", 00:04:47.943 "ffdhe4096", 00:04:47.943 "ffdhe6144", 00:04:47.943 "ffdhe8192" 00:04:47.943 ] 00:04:47.943 } 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "method": "bdev_nvme_set_hotplug", 00:04:47.943 "params": { 00:04:47.943 "period_us": 100000, 00:04:47.943 "enable": false 00:04:47.943 } 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "method": "bdev_wait_for_examine" 00:04:47.943 } 00:04:47.943 ] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "scsi", 00:04:47.943 "config": null 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "scheduler", 00:04:47.943 "config": [ 00:04:47.943 { 00:04:47.943 "method": "framework_set_scheduler", 00:04:47.943 "params": { 00:04:47.943 "name": "static" 00:04:47.943 } 00:04:47.943 } 00:04:47.943 ] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "vhost_scsi", 00:04:47.943 "config": [] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "vhost_blk", 00:04:47.943 "config": [] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "ublk", 00:04:47.943 "config": [] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "nbd", 00:04:47.943 "config": [] 00:04:47.943 }, 00:04:47.943 { 00:04:47.943 "subsystem": "nvmf", 00:04:47.943 "config": [ 00:04:47.943 { 00:04:47.943 "method": "nvmf_set_config", 00:04:47.943 "params": { 00:04:47.943 "discovery_filter": "match_any", 00:04:47.943 "admin_cmd_passthru": { 00:04:47.943 "identify_ctrlr": false 00:04:47.943 }, 00:04:47.943 "dhchap_digests": [ 00:04:47.943 "sha256", 00:04:47.943 "sha384", 00:04:47.943 "sha512" 00:04:47.943 ], 00:04:47.943 "dhchap_dhgroups": [ 00:04:47.943 "null", 00:04:47.943 "ffdhe2048", 00:04:47.943 "ffdhe3072", 00:04:47.943 "ffdhe4096", 00:04:47.944 "ffdhe6144", 00:04:47.944 "ffdhe8192" 00:04:47.944 ] 00:04:47.944 } 00:04:47.944 }, 00:04:47.944 { 00:04:47.944 "method": "nvmf_set_max_subsystems", 00:04:47.944 "params": { 00:04:47.944 "max_subsystems": 1024 00:04:47.944 } 00:04:47.944 }, 00:04:47.944 { 00:04:47.944 "method": "nvmf_set_crdt", 00:04:47.944 "params": { 00:04:47.944 "crdt1": 0, 00:04:47.944 "crdt2": 0, 00:04:47.944 "crdt3": 0 00:04:47.944 } 00:04:47.944 }, 00:04:47.944 { 00:04:47.944 "method": "nvmf_create_transport", 00:04:47.944 "params": { 00:04:47.944 "trtype": "TCP", 00:04:47.944 "max_queue_depth": 128, 00:04:47.944 "max_io_qpairs_per_ctrlr": 127, 00:04:47.944 "in_capsule_data_size": 4096, 00:04:47.944 "max_io_size": 131072, 00:04:47.944 "io_unit_size": 131072, 00:04:47.944 "max_aq_depth": 128, 00:04:47.944 "num_shared_buffers": 511, 00:04:47.944 "buf_cache_size": 4294967295, 00:04:47.944 
"dif_insert_or_strip": false, 00:04:47.944 "zcopy": false, 00:04:47.944 "c2h_success": true, 00:04:47.944 "sock_priority": 0, 00:04:47.944 "abort_timeout_sec": 1, 00:04:47.944 "ack_timeout": 0, 00:04:47.944 "data_wr_pool_size": 0 00:04:47.944 } 00:04:47.944 } 00:04:47.944 ] 00:04:47.944 }, 00:04:47.944 { 00:04:47.944 "subsystem": "iscsi", 00:04:47.944 "config": [ 00:04:47.944 { 00:04:47.944 "method": "iscsi_set_options", 00:04:47.944 "params": { 00:04:47.944 "node_base": "iqn.2016-06.io.spdk", 00:04:47.944 "max_sessions": 128, 00:04:47.944 "max_connections_per_session": 2, 00:04:47.944 "max_queue_depth": 64, 00:04:47.944 "default_time2wait": 2, 00:04:47.944 "default_time2retain": 20, 00:04:47.944 "first_burst_length": 8192, 00:04:47.944 "immediate_data": true, 00:04:47.944 "allow_duplicated_isid": false, 00:04:47.944 "error_recovery_level": 0, 00:04:47.944 "nop_timeout": 60, 00:04:47.944 "nop_in_interval": 30, 00:04:47.944 "disable_chap": false, 00:04:47.944 "require_chap": false, 00:04:47.944 "mutual_chap": false, 00:04:47.944 "chap_group": 0, 00:04:47.944 "max_large_datain_per_connection": 64, 00:04:47.944 "max_r2t_per_connection": 4, 00:04:47.944 "pdu_pool_size": 36864, 00:04:47.944 "immediate_data_pool_size": 16384, 00:04:47.944 "data_out_pool_size": 2048 00:04:47.944 } 00:04:47.944 } 00:04:47.944 ] 00:04:47.944 } 00:04:47.944 ] 00:04:47.944 } 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57175 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57175 ']' 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57175 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57175 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.944 killing process with pid 57175 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57175' 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57175 00:04:47.944 08:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57175 00:04:48.513 08:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57203 00:04:48.513 08:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.513 08:15:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:53.802 08:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57203 00:04:53.802 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57203 ']' 00:04:53.802 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57203 00:04:53.802 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:53.802 08:15:55 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:53.802 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57203 00:04:53.803 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:53.803 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:53.803 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57203' 00:04:53.803 killing process with pid 57203 00:04:53.803 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57203 00:04:53.803 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57203 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.061 00:04:54.061 real 0m7.459s 00:04:54.061 user 0m7.087s 00:04:54.061 sys 0m0.823s 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.061 ************************************ 00:04:54.061 END TEST skip_rpc_with_json 00:04:54.061 ************************************ 00:04:54.061 08:15:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.061 08:15:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.061 08:15:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.061 08:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.061 ************************************ 00:04:54.061 START TEST skip_rpc_with_delay 00:04:54.061 ************************************ 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.061 08:15:55 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:54.062 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.320 [2024-10-15 08:15:55.815171] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:54.320 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:54.320 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.320 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.320 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.320 00:04:54.320 real 0m0.099s 00:04:54.320 user 0m0.055s 00:04:54.320 sys 0m0.043s 00:04:54.320 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.320 08:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.320 ************************************ 00:04:54.320 END TEST skip_rpc_with_delay 00:04:54.320 ************************************ 00:04:54.320 08:15:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.320 08:15:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.320 08:15:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.320 08:15:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.320 08:15:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.320 08:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.320 ************************************ 00:04:54.320 START TEST exit_on_failed_rpc_init 00:04:54.320 ************************************ 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57318 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57318 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57318 ']' 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.320 08:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.320 [2024-10-15 08:15:55.957633] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
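The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above amounts to polling the target's RPC socket until it answers. A rough, illustrative sketch of that pattern (not the actual autotest waitforlisten helper; paths are relative to the spdk repo):
  # start the target, then poll its RPC socket until it responds
  ./build/bin/spdk_tgt -m 0x1 &
  pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" || { echo "target exited before listening"; exit 1; }
      sleep 0.1
  done
  echo "target $pid is listening on /var/tmp/spdk.sock"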
00:04:54.320 [2024-10-15 08:15:55.957787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57318 ] 00:04:54.579 [2024-10-15 08:15:56.094185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.579 [2024-10-15 08:15:56.170930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.579 [2024-10-15 08:15:56.279491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:55.515 08:15:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.515 [2024-10-15 08:15:57.070750] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:04:55.515 [2024-10-15 08:15:57.070877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57336 ] 00:04:55.515 [2024-10-15 08:15:57.209472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.774 [2024-10-15 08:15:57.293499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.774 [2024-10-15 08:15:57.293611] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:55.775 [2024-10-15 08:15:57.293625] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:55.775 [2024-10-15 08:15:57.293638] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57318 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57318 ']' 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57318 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57318 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.775 killing process with pid 57318 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57318' 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57318 00:04:55.775 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57318 00:04:56.342 00:04:56.342 real 0m2.076s 00:04:56.342 user 0m2.334s 00:04:56.342 sys 0m0.542s 00:04:56.342 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.342 08:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.342 ************************************ 00:04:56.342 END TEST exit_on_failed_rpc_init 00:04:56.342 ************************************ 00:04:56.342 08:15:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.342 00:04:56.342 real 0m15.639s 00:04:56.342 user 0m14.805s 00:04:56.342 sys 0m1.981s 00:04:56.342 08:15:58 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.342 08:15:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.342 ************************************ 00:04:56.342 END TEST skip_rpc 00:04:56.342 ************************************ 00:04:56.342 08:15:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.342 08:15:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.342 08:15:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.342 08:15:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.342 
************************************ 00:04:56.342 START TEST rpc_client 00:04:56.342 ************************************ 00:04:56.342 08:15:58 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.601 * Looking for test storage... 00:04:56.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:56.601 08:15:58 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.601 08:15:58 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.601 08:15:58 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.601 08:15:58 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.601 08:15:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:56.601 08:15:58 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.601 08:15:58 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.601 --rc genhtml_branch_coverage=1 00:04:56.601 --rc genhtml_function_coverage=1 00:04:56.601 --rc genhtml_legend=1 00:04:56.601 --rc geninfo_all_blocks=1 00:04:56.601 --rc geninfo_unexecuted_blocks=1 00:04:56.602 00:04:56.602 ' 00:04:56.602 08:15:58 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.602 --rc genhtml_branch_coverage=1 00:04:56.602 --rc genhtml_function_coverage=1 00:04:56.602 --rc genhtml_legend=1 00:04:56.602 --rc geninfo_all_blocks=1 00:04:56.602 --rc geninfo_unexecuted_blocks=1 00:04:56.602 00:04:56.602 ' 00:04:56.602 08:15:58 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.602 --rc genhtml_branch_coverage=1 00:04:56.602 --rc genhtml_function_coverage=1 00:04:56.602 --rc genhtml_legend=1 00:04:56.602 --rc geninfo_all_blocks=1 00:04:56.602 --rc geninfo_unexecuted_blocks=1 00:04:56.602 00:04:56.602 ' 00:04:56.602 08:15:58 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.602 --rc genhtml_branch_coverage=1 00:04:56.602 --rc genhtml_function_coverage=1 00:04:56.602 --rc genhtml_legend=1 00:04:56.602 --rc geninfo_all_blocks=1 00:04:56.602 --rc geninfo_unexecuted_blocks=1 00:04:56.602 00:04:56.602 ' 00:04:56.602 08:15:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:56.602 OK 00:04:56.602 08:15:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.602 00:04:56.602 real 0m0.222s 00:04:56.602 user 0m0.128s 00:04:56.602 sys 0m0.102s 00:04:56.602 08:15:58 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.602 08:15:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:56.602 ************************************ 00:04:56.602 END TEST rpc_client 00:04:56.602 ************************************ 00:04:56.862 08:15:58 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.862 08:15:58 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.862 08:15:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.862 08:15:58 -- common/autotest_common.sh@10 -- # set +x 00:04:56.862 ************************************ 00:04:56.862 START TEST json_config 00:04:56.862 ************************************ 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.862 08:15:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.862 08:15:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.862 08:15:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.862 08:15:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.862 08:15:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.862 08:15:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.862 08:15:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.862 08:15:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.862 08:15:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.862 08:15:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.862 08:15:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.862 08:15:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:56.862 08:15:58 json_config -- scripts/common.sh@345 -- # : 1 00:04:56.862 08:15:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.862 08:15:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.862 08:15:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:56.862 08:15:58 json_config -- scripts/common.sh@353 -- # local d=1 00:04:56.862 08:15:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.862 08:15:58 json_config -- scripts/common.sh@355 -- # echo 1 00:04:56.862 08:15:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.862 08:15:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:56.862 08:15:58 json_config -- scripts/common.sh@353 -- # local d=2 00:04:56.862 08:15:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.862 08:15:58 json_config -- scripts/common.sh@355 -- # echo 2 00:04:56.862 08:15:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.862 08:15:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.862 08:15:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.862 08:15:58 json_config -- scripts/common.sh@368 -- # return 0 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.862 --rc genhtml_branch_coverage=1 00:04:56.862 --rc genhtml_function_coverage=1 00:04:56.862 --rc genhtml_legend=1 00:04:56.862 --rc geninfo_all_blocks=1 00:04:56.862 --rc geninfo_unexecuted_blocks=1 00:04:56.862 00:04:56.862 ' 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.862 --rc genhtml_branch_coverage=1 00:04:56.862 --rc genhtml_function_coverage=1 00:04:56.862 --rc genhtml_legend=1 00:04:56.862 --rc geninfo_all_blocks=1 00:04:56.862 --rc geninfo_unexecuted_blocks=1 00:04:56.862 00:04:56.862 ' 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.862 --rc genhtml_branch_coverage=1 00:04:56.862 --rc genhtml_function_coverage=1 00:04:56.862 --rc genhtml_legend=1 00:04:56.862 --rc geninfo_all_blocks=1 00:04:56.862 --rc geninfo_unexecuted_blocks=1 00:04:56.862 00:04:56.862 ' 00:04:56.862 08:15:58 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.862 --rc genhtml_branch_coverage=1 00:04:56.862 --rc genhtml_function_coverage=1 00:04:56.862 --rc genhtml_legend=1 00:04:56.862 --rc geninfo_all_blocks=1 00:04:56.862 --rc geninfo_unexecuted_blocks=1 00:04:56.862 00:04:56.862 ' 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.862 08:15:58 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.862 08:15:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.862 08:15:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.862 08:15:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.862 08:15:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.862 08:15:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.862 08:15:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.862 08:15:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.862 08:15:58 json_config -- paths/export.sh@5 -- # export PATH 00:04:56.862 08:15:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@51 -- # : 0 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.862 08:15:58 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.862 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.862 08:15:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:56.862 08:15:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:56.863 08:15:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:56.863 08:15:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:56.863 08:15:58 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.863 INFO: JSON configuration test init 00:04:56.863 08:15:58 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:56.863 08:15:58 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:56.863 08:15:58 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.863 08:15:58 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.863 08:15:58 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:56.863 08:15:58 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.863 08:15:58 json_config -- json_config/common.sh@10 -- # shift 
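In this test the target is launched with --wait-for-rpc, which holds subsystem initialization until it is driven over the RPC socket (the test later does this via load_config). Outside the harness, the same startup flow looks roughly like this illustrative sketch:
  # start with init deferred, then finish startup explicitly over RPC
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # ... wait for /var/tmp/spdk_tgt.sock to come up, as sketched earlier ...
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init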
00:04:56.863 08:15:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.863 08:15:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.863 08:15:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.863 08:15:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.863 08:15:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.863 08:15:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57470 00:04:56.863 Waiting for target to run... 00:04:56.863 08:15:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.863 08:15:58 json_config -- json_config/common.sh@25 -- # waitforlisten 57470 /var/tmp/spdk_tgt.sock 00:04:56.863 08:15:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@831 -- # '[' -z 57470 ']' 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:56.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:56.863 08:15:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.122 [2024-10-15 08:15:58.605705] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:04:57.122 [2024-10-15 08:15:58.605800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57470 ] 00:04:57.689 [2024-10-15 08:15:59.135794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.689 [2024-10-15 08:15:59.209722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.257 08:15:59 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.257 00:04:58.257 08:15:59 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:58.257 08:15:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.257 08:15:59 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:58.257 08:15:59 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:58.257 08:15:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.257 08:15:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.257 08:15:59 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:58.257 08:15:59 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:58.257 08:15:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.257 08:15:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.257 08:15:59 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:58.257 08:15:59 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:58.257 08:15:59 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:58.515 [2024-10-15 08:16:00.160966] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:58.774 08:16:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.774 08:16:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:58.774 08:16:00 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:58.774 08:16:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@54 -- # sort 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:59.033 08:16:00 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.033 08:16:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:59.033 08:16:00 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.033 08:16:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.033 08:16:00 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:59.033 08:16:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.033 08:16:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.610 MallocForNvmf0 00:04:59.610 08:16:01 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.610 08:16:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.610 MallocForNvmf1 00:04:59.610 08:16:01 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:59.610 08:16:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:00.177 [2024-10-15 08:16:01.603729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.177 08:16:01 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.177 08:16:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.177 08:16:01 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.177 08:16:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.435 08:16:02 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.435 08:16:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.694 08:16:02 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.694 08:16:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.952 [2024-10-15 08:16:02.636523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.952 08:16:02 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:00.952 08:16:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.952 08:16:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.211 08:16:02 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:01.211 08:16:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.211 08:16:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.211 08:16:02 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:01.211 08:16:02 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.211 08:16:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.469 MallocBdevForConfigChangeCheck 00:05:01.469 08:16:03 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:01.469 08:16:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.469 08:16:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.469 08:16:03 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:01.469 08:16:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.036 INFO: shutting down applications... 00:05:02.036 08:16:03 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:02.036 08:16:03 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:02.036 08:16:03 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:02.036 08:16:03 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:02.037 08:16:03 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:02.295 Calling clear_iscsi_subsystem 00:05:02.295 Calling clear_nvmf_subsystem 00:05:02.295 Calling clear_nbd_subsystem 00:05:02.295 Calling clear_ublk_subsystem 00:05:02.295 Calling clear_vhost_blk_subsystem 00:05:02.295 Calling clear_vhost_scsi_subsystem 00:05:02.295 Calling clear_bdev_subsystem 00:05:02.295 08:16:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:02.295 08:16:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:02.295 08:16:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:02.295 08:16:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:02.295 08:16:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.295 08:16:03 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:02.554 08:16:04 json_config -- json_config/json_config.sh@352 -- # break 00:05:02.554 08:16:04 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:02.554 08:16:04 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:02.554 08:16:04 json_config -- json_config/common.sh@31 -- # local app=target 00:05:02.554 08:16:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.554 08:16:04 json_config -- json_config/common.sh@35 -- # [[ -n 57470 ]] 00:05:02.554 08:16:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57470 00:05:02.554 08:16:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.554 08:16:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.554 08:16:04 json_config -- json_config/common.sh@41 -- # kill -0 57470 00:05:02.554 08:16:04 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:03.121 08:16:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.121 08:16:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.121 08:16:04 json_config -- json_config/common.sh@41 -- # kill -0 57470 00:05:03.121 SPDK target shutdown done 00:05:03.121 INFO: relaunching applications... 00:05:03.121 08:16:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.121 08:16:04 json_config -- json_config/common.sh@43 -- # break 00:05:03.121 08:16:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.121 08:16:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.121 08:16:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:03.121 08:16:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:03.121 08:16:04 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.121 08:16:04 json_config -- json_config/common.sh@10 -- # shift 00:05:03.121 08:16:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.121 08:16:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.121 08:16:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.121 08:16:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.121 08:16:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.121 08:16:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57676 00:05:03.121 08:16:04 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:03.121 Waiting for target to run... 00:05:03.121 08:16:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.121 08:16:04 json_config -- json_config/common.sh@25 -- # waitforlisten 57676 /var/tmp/spdk_tgt.sock 00:05:03.121 08:16:04 json_config -- common/autotest_common.sh@831 -- # '[' -z 57676 ']' 00:05:03.121 08:16:04 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.121 08:16:04 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.121 08:16:04 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.121 08:16:04 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.121 08:16:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.121 [2024-10-15 08:16:04.844542] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
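This relaunch is the heart of the test: the configuration captured from the first target instance (via save_config above) is handed straight to a fresh instance with --json. Reduced to its essentials, the round trip is:
  # capture the running target's configuration, then boot a new target from it
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json
Both commands appear in the trace; only the file path handling is simplified here.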
00:05:03.121 [2024-10-15 08:16:04.844663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57676 ] 00:05:03.688 [2024-10-15 08:16:05.369413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.947 [2024-10-15 08:16:05.433582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.947 [2024-10-15 08:16:05.575588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:04.206 [2024-10-15 08:16:05.806617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.206 [2024-10-15 08:16:05.838763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.206 00:05:04.206 INFO: Checking if target configuration is the same... 00:05:04.206 08:16:05 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.206 08:16:05 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:04.206 08:16:05 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.206 08:16:05 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:04.206 08:16:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:04.206 08:16:05 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:04.206 08:16:05 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:04.206 08:16:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.206 + '[' 2 -ne 2 ']' 00:05:04.206 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:04.206 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:04.206 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:04.206 +++ basename /dev/fd/62 00:05:04.206 ++ mktemp /tmp/62.XXX 00:05:04.206 + tmp_file_1=/tmp/62.59V 00:05:04.206 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:04.206 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.206 + tmp_file_2=/tmp/spdk_tgt_config.json.Q2S 00:05:04.206 + ret=0 00:05:04.206 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:04.773 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:04.774 + diff -u /tmp/62.59V /tmp/spdk_tgt_config.json.Q2S 00:05:04.774 INFO: JSON config files are the same 00:05:04.774 + echo 'INFO: JSON config files are the same' 00:05:04.774 + rm /tmp/62.59V /tmp/spdk_tgt_config.json.Q2S 00:05:04.774 + exit 0 00:05:04.774 INFO: changing configuration and checking if this can be detected... 00:05:04.774 08:16:06 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:04.774 08:16:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
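The comparison json_diff.sh runs above normalizes both configurations with config_filter.py and diffs the result; an empty diff means the relaunched target still reports the same configuration that was saved. A simplified sketch of the same check (the real script feeds one side through a file descriptor):
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'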
00:05:04.774 08:16:06 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:04.774 08:16:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.033 08:16:06 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:05.033 08:16:06 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:05.033 08:16:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.033 + '[' 2 -ne 2 ']' 00:05:05.033 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:05.033 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:05.033 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:05.033 +++ basename /dev/fd/62 00:05:05.033 ++ mktemp /tmp/62.XXX 00:05:05.033 + tmp_file_1=/tmp/62.rU0 00:05:05.033 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:05.033 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.033 + tmp_file_2=/tmp/spdk_tgt_config.json.IaP 00:05:05.033 + ret=0 00:05:05.033 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:05.600 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:05.600 + diff -u /tmp/62.rU0 /tmp/spdk_tgt_config.json.IaP 00:05:05.600 + ret=1 00:05:05.600 + echo '=== Start of file: /tmp/62.rU0 ===' 00:05:05.600 + cat /tmp/62.rU0 00:05:05.600 + echo '=== End of file: /tmp/62.rU0 ===' 00:05:05.600 + echo '' 00:05:05.600 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IaP ===' 00:05:05.601 + cat /tmp/spdk_tgt_config.json.IaP 00:05:05.601 + echo '=== End of file: /tmp/spdk_tgt_config.json.IaP ===' 00:05:05.601 + echo '' 00:05:05.601 + rm /tmp/62.rU0 /tmp/spdk_tgt_config.json.IaP 00:05:05.601 + exit 1 00:05:05.601 INFO: configuration change detected. 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
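For context, the "change" detected here is induced deliberately: the test deletes the marker bdev it created earlier (MallocBdevForConfigChangeCheck), so the next save_config output no longer matches the saved file and the sorted diff comes back non-empty. The two RPCs involved, as seen in the trace:
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck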
00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@324 -- # [[ -n 57676 ]] 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.601 08:16:07 json_config -- json_config/json_config.sh@330 -- # killprocess 57676 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@950 -- # '[' -z 57676 ']' 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@954 -- # kill -0 57676 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@955 -- # uname 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57676 00:05:05.601 killing process with pid 57676 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57676' 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@969 -- # kill 57676 00:05:05.601 08:16:07 json_config -- common/autotest_common.sh@974 -- # wait 57676 00:05:06.169 08:16:07 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:06.169 08:16:07 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:06.169 08:16:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.169 08:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.169 INFO: Success 00:05:06.169 08:16:07 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:06.169 08:16:07 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:06.169 00:05:06.169 real 0m9.349s 00:05:06.169 user 0m13.399s 00:05:06.169 sys 0m2.083s 00:05:06.169 
************************************ 00:05:06.169 END TEST json_config 00:05:06.169 ************************************ 00:05:06.169 08:16:07 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.169 08:16:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.169 08:16:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:06.169 08:16:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.169 08:16:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.169 08:16:07 -- common/autotest_common.sh@10 -- # set +x 00:05:06.169 ************************************ 00:05:06.169 START TEST json_config_extra_key 00:05:06.169 ************************************ 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.169 08:16:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.169 --rc genhtml_branch_coverage=1 00:05:06.169 --rc genhtml_function_coverage=1 00:05:06.169 --rc genhtml_legend=1 00:05:06.169 --rc geninfo_all_blocks=1 00:05:06.169 --rc geninfo_unexecuted_blocks=1 00:05:06.169 00:05:06.169 ' 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.169 --rc genhtml_branch_coverage=1 00:05:06.169 --rc genhtml_function_coverage=1 00:05:06.169 --rc genhtml_legend=1 00:05:06.169 --rc geninfo_all_blocks=1 00:05:06.169 --rc geninfo_unexecuted_blocks=1 00:05:06.169 00:05:06.169 ' 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.169 --rc genhtml_branch_coverage=1 00:05:06.169 --rc genhtml_function_coverage=1 00:05:06.169 --rc genhtml_legend=1 00:05:06.169 --rc geninfo_all_blocks=1 00:05:06.169 --rc geninfo_unexecuted_blocks=1 00:05:06.169 00:05:06.169 ' 00:05:06.169 08:16:07 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.169 --rc genhtml_branch_coverage=1 00:05:06.169 --rc genhtml_function_coverage=1 00:05:06.169 --rc genhtml_legend=1 00:05:06.169 --rc geninfo_all_blocks=1 00:05:06.169 --rc geninfo_unexecuted_blocks=1 00:05:06.169 00:05:06.169 ' 00:05:06.169 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:06.169 08:16:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.429 08:16:07 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:06.429 08:16:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.429 08:16:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.429 08:16:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.429 08:16:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.429 08:16:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.429 08:16:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.429 08:16:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.429 08:16:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:06.429 08:16:07 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.429 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.429 08:16:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:06.429 INFO: launching applications... 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:06.429 08:16:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57833 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.429 Waiting for target to run... 00:05:06.429 08:16:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57833 /var/tmp/spdk_tgt.sock 00:05:06.429 08:16:07 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57833 ']' 00:05:06.429 08:16:07 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.429 08:16:07 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.429 08:16:07 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.429 08:16:07 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.429 08:16:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:06.429 [2024-10-15 08:16:07.988040] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:06.429 [2024-10-15 08:16:07.988512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57833 ] 00:05:06.995 [2024-10-15 08:16:08.519412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.995 [2024-10-15 08:16:08.587106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.995 [2024-10-15 08:16:08.623834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.582 08:16:09 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.582 00:05:07.582 08:16:09 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:07.582 INFO: shutting down applications... 00:05:07.582 08:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
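json_config_test_start_app above launches spdk_tgt with -r /var/tmp/spdk_tgt.sock --json extra_key.json and blocks in waitforlisten (max_retries=100) until the RPC socket answers; the shutdown loop that follows sends SIGINT and then polls with kill -0 every 0.5 s, up to 30 times, until pid 57833 is gone. A rough Python sketch of that start/wait/shutdown lifecycle is given below; the binary and socket paths are placeholders and the retry counts are simply taken from this log, not a canonical harness.

```python
import signal
import socket
import subprocess
import time

# Placeholder command and socket path, standing in for the paths in the log.
TGT_CMD = ["./build/bin/spdk_tgt", "-r", "/var/tmp/spdk_tgt.sock",
           "--json", "extra_key.json"]
RPC_SOCK = "/var/tmp/spdk_tgt.sock"

def wait_for_rpc(path, retries=100, delay=0.1):
    """Poll the UNIX-domain RPC socket until the target accepts a connection."""
    for _ in range(retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return True
        except OSError:
            time.sleep(delay)
    return False

def shutdown(proc, tries=30, delay=0.5):
    """Send SIGINT, then poll (mirroring the kill -0 loop) until the pid exits."""
    proc.send_signal(signal.SIGINT)
    for _ in range(tries):
        if proc.poll() is not None:
            print("SPDK target shutdown done")
            return True
        time.sleep(delay)
    return False

if __name__ == "__main__":
    tgt = subprocess.Popen(TGT_CMD)
    try:
        if not wait_for_rpc(RPC_SOCK):
            raise RuntimeError("target never started listening on " + RPC_SOCK)
        # ... issue RPCs against the running target here ...
    finally:
        shutdown(tgt)
```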
00:05:07.582 08:16:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57833 ]] 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57833 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57833 00:05:07.582 08:16:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.149 08:16:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.149 08:16:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.149 08:16:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57833 00:05:08.149 08:16:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.408 08:16:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.408 08:16:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.408 08:16:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57833 00:05:08.408 08:16:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.408 08:16:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:08.408 08:16:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.408 SPDK target shutdown done 00:05:08.408 Success 00:05:08.408 08:16:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.408 08:16:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:08.408 00:05:08.408 real 0m2.346s 00:05:08.408 user 0m1.900s 00:05:08.408 sys 0m0.571s 00:05:08.408 ************************************ 00:05:08.408 END TEST json_config_extra_key 00:05:08.408 ************************************ 00:05:08.408 08:16:10 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.408 08:16:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.408 08:16:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.408 08:16:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.408 08:16:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.408 08:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:08.666 ************************************ 00:05:08.666 START TEST alias_rpc 00:05:08.666 ************************************ 00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.666 * Looking for test storage... 
00:05:08.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.666 08:16:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:08.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.666 --rc genhtml_branch_coverage=1 00:05:08.666 --rc genhtml_function_coverage=1 00:05:08.666 --rc genhtml_legend=1 00:05:08.666 --rc geninfo_all_blocks=1 00:05:08.666 --rc geninfo_unexecuted_blocks=1 00:05:08.666 00:05:08.666 ' 00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.666 --rc genhtml_branch_coverage=1 00:05:08.666 --rc genhtml_function_coverage=1 00:05:08.666 --rc genhtml_legend=1 00:05:08.666 --rc geninfo_all_blocks=1 00:05:08.666 --rc geninfo_unexecuted_blocks=1 00:05:08.666 00:05:08.666 ' 00:05:08.666 08:16:10 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.666 --rc genhtml_branch_coverage=1 00:05:08.666 --rc genhtml_function_coverage=1 00:05:08.666 --rc genhtml_legend=1 00:05:08.666 --rc geninfo_all_blocks=1 00:05:08.667 --rc geninfo_unexecuted_blocks=1 00:05:08.667 00:05:08.667 ' 00:05:08.667 08:16:10 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.667 --rc genhtml_branch_coverage=1 00:05:08.667 --rc genhtml_function_coverage=1 00:05:08.667 --rc genhtml_legend=1 00:05:08.667 --rc geninfo_all_blocks=1 00:05:08.667 --rc geninfo_unexecuted_blocks=1 00:05:08.667 00:05:08.667 ' 00:05:08.667 08:16:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.667 08:16:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57918 00:05:08.667 08:16:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57918 00:05:08.667 08:16:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.667 08:16:10 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57918 ']' 00:05:08.667 08:16:10 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.667 08:16:10 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.667 08:16:10 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.667 08:16:10 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.667 08:16:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.925 [2024-10-15 08:16:10.427285] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:05:08.925 [2024-10-15 08:16:10.427935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57918 ] 00:05:08.925 [2024-10-15 08:16:10.580693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.184 [2024-10-15 08:16:10.670450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.184 [2024-10-15 08:16:10.774758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:10.120 08:16:11 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.120 08:16:11 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:10.120 08:16:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:10.120 08:16:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57918 00:05:10.120 08:16:11 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57918 ']' 00:05:10.120 08:16:11 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57918 00:05:10.120 08:16:11 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:10.120 08:16:11 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.120 08:16:11 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57918 00:05:10.379 killing process with pid 57918 00:05:10.379 08:16:11 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.379 08:16:11 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.379 08:16:11 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57918' 00:05:10.379 08:16:11 alias_rpc -- common/autotest_common.sh@969 -- # kill 57918 00:05:10.379 08:16:11 alias_rpc -- common/autotest_common.sh@974 -- # wait 57918 00:05:10.947 ************************************ 00:05:10.947 END TEST alias_rpc 00:05:10.947 ************************************ 00:05:10.947 00:05:10.947 real 0m2.298s 00:05:10.947 user 0m2.565s 00:05:10.947 sys 0m0.593s 00:05:10.947 08:16:12 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.947 08:16:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.947 08:16:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:10.947 08:16:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:10.947 08:16:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.947 08:16:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.947 08:16:12 -- common/autotest_common.sh@10 -- # set +x 00:05:10.947 ************************************ 00:05:10.947 START TEST spdkcli_tcp 00:05:10.947 ************************************ 00:05:10.947 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:10.947 * Looking for test storage... 
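In the spdkcli_tcp run that follows, the target's UNIX RPC socket is bridged to 127.0.0.1:9998 with socat and rpc_get_methods is issued through rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998, which prints the long method list shown further below. A bare-bones JSON-RPC-over-TCP client in the same spirit is sketched here; the read-until-it-parses response handling is a simplifying assumption, not the actual rpc.py implementation.

```python
import json
import socket

def rpc_call(method, params=None, host="127.0.0.1", port=9998, request_id=1):
    """Send one JSON-RPC 2.0 request to an SPDK-style target over TCP.

    Assumes something like `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock`
    is forwarding the port, as in the test below.
    """
    request = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        request["params"] = params

    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(65536)
            if not chunk:
                break
            buf += chunk
            try:
                # No length prefix is assumed here, so keep reading until the
                # accumulated bytes parse as one complete JSON document.
                return json.loads(buf)
            except ValueError:
                continue
    raise RuntimeError("connection closed before a full response arrived")

if __name__ == "__main__":
    reply = rpc_call("rpc_get_methods")
    for name in reply.get("result", []):
        print(name)
```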
00:05:10.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:10.947 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.947 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.947 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.206 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.206 08:16:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:11.206 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.206 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.206 --rc genhtml_branch_coverage=1 00:05:11.206 --rc genhtml_function_coverage=1 00:05:11.206 --rc genhtml_legend=1 00:05:11.206 --rc geninfo_all_blocks=1 00:05:11.206 --rc geninfo_unexecuted_blocks=1 00:05:11.206 00:05:11.206 ' 00:05:11.206 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.206 --rc genhtml_branch_coverage=1 00:05:11.206 --rc genhtml_function_coverage=1 00:05:11.206 --rc genhtml_legend=1 00:05:11.206 --rc geninfo_all_blocks=1 00:05:11.206 --rc geninfo_unexecuted_blocks=1 00:05:11.206 
00:05:11.206 ' 00:05:11.206 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.206 --rc genhtml_branch_coverage=1 00:05:11.206 --rc genhtml_function_coverage=1 00:05:11.206 --rc genhtml_legend=1 00:05:11.206 --rc geninfo_all_blocks=1 00:05:11.206 --rc geninfo_unexecuted_blocks=1 00:05:11.206 00:05:11.206 ' 00:05:11.206 08:16:12 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.206 --rc genhtml_branch_coverage=1 00:05:11.206 --rc genhtml_function_coverage=1 00:05:11.206 --rc genhtml_legend=1 00:05:11.206 --rc geninfo_all_blocks=1 00:05:11.206 --rc geninfo_unexecuted_blocks=1 00:05:11.206 00:05:11.206 ' 00:05:11.206 08:16:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:11.206 08:16:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:11.206 08:16:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:11.206 08:16:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:11.206 08:16:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:11.206 08:16:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:11.206 08:16:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:11.206 08:16:12 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.206 08:16:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.207 08:16:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58002 00:05:11.207 08:16:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:11.207 08:16:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58002 00:05:11.207 08:16:12 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58002 ']' 00:05:11.207 08:16:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.207 08:16:12 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.207 08:16:12 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.207 08:16:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.207 08:16:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.207 [2024-10-15 08:16:12.766142] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:05:11.207 [2024-10-15 08:16:12.766251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58002 ] 00:05:11.207 [2024-10-15 08:16:12.905767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.466 [2024-10-15 08:16:12.986759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.466 [2024-10-15 08:16:12.986772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.466 [2024-10-15 08:16:13.080083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:11.725 08:16:13 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.725 08:16:13 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:11.725 08:16:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.725 08:16:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58017 00:05:11.725 08:16:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.985 [ 00:05:11.985 "bdev_malloc_delete", 00:05:11.985 "bdev_malloc_create", 00:05:11.985 "bdev_null_resize", 00:05:11.985 "bdev_null_delete", 00:05:11.985 "bdev_null_create", 00:05:11.985 "bdev_nvme_cuse_unregister", 00:05:11.985 "bdev_nvme_cuse_register", 00:05:11.985 "bdev_opal_new_user", 00:05:11.985 "bdev_opal_set_lock_state", 00:05:11.985 "bdev_opal_delete", 00:05:11.985 "bdev_opal_get_info", 00:05:11.985 "bdev_opal_create", 00:05:11.985 "bdev_nvme_opal_revert", 00:05:11.985 "bdev_nvme_opal_init", 00:05:11.985 "bdev_nvme_send_cmd", 00:05:11.985 "bdev_nvme_set_keys", 00:05:11.985 "bdev_nvme_get_path_iostat", 00:05:11.985 "bdev_nvme_get_mdns_discovery_info", 00:05:11.985 "bdev_nvme_stop_mdns_discovery", 00:05:11.985 "bdev_nvme_start_mdns_discovery", 00:05:11.985 "bdev_nvme_set_multipath_policy", 00:05:11.985 "bdev_nvme_set_preferred_path", 00:05:11.985 "bdev_nvme_get_io_paths", 00:05:11.985 "bdev_nvme_remove_error_injection", 00:05:11.985 "bdev_nvme_add_error_injection", 00:05:11.985 "bdev_nvme_get_discovery_info", 00:05:11.985 "bdev_nvme_stop_discovery", 00:05:11.985 "bdev_nvme_start_discovery", 00:05:11.985 "bdev_nvme_get_controller_health_info", 00:05:11.985 "bdev_nvme_disable_controller", 00:05:11.985 "bdev_nvme_enable_controller", 00:05:11.985 "bdev_nvme_reset_controller", 00:05:11.985 "bdev_nvme_get_transport_statistics", 00:05:11.985 "bdev_nvme_apply_firmware", 00:05:11.985 "bdev_nvme_detach_controller", 00:05:11.985 "bdev_nvme_get_controllers", 00:05:11.985 "bdev_nvme_attach_controller", 00:05:11.985 "bdev_nvme_set_hotplug", 00:05:11.985 "bdev_nvme_set_options", 00:05:11.985 "bdev_passthru_delete", 00:05:11.985 "bdev_passthru_create", 00:05:11.985 "bdev_lvol_set_parent_bdev", 00:05:11.985 "bdev_lvol_set_parent", 00:05:11.985 "bdev_lvol_check_shallow_copy", 00:05:11.985 "bdev_lvol_start_shallow_copy", 00:05:11.985 "bdev_lvol_grow_lvstore", 00:05:11.985 "bdev_lvol_get_lvols", 00:05:11.985 "bdev_lvol_get_lvstores", 00:05:11.985 "bdev_lvol_delete", 00:05:11.985 "bdev_lvol_set_read_only", 00:05:11.985 "bdev_lvol_resize", 00:05:11.985 "bdev_lvol_decouple_parent", 00:05:11.985 "bdev_lvol_inflate", 00:05:11.985 "bdev_lvol_rename", 00:05:11.985 "bdev_lvol_clone_bdev", 00:05:11.985 "bdev_lvol_clone", 00:05:11.985 "bdev_lvol_snapshot", 
00:05:11.985 "bdev_lvol_create", 00:05:11.985 "bdev_lvol_delete_lvstore", 00:05:11.985 "bdev_lvol_rename_lvstore", 00:05:11.985 "bdev_lvol_create_lvstore", 00:05:11.985 "bdev_raid_set_options", 00:05:11.985 "bdev_raid_remove_base_bdev", 00:05:11.985 "bdev_raid_add_base_bdev", 00:05:11.985 "bdev_raid_delete", 00:05:11.985 "bdev_raid_create", 00:05:11.985 "bdev_raid_get_bdevs", 00:05:11.985 "bdev_error_inject_error", 00:05:11.985 "bdev_error_delete", 00:05:11.985 "bdev_error_create", 00:05:11.985 "bdev_split_delete", 00:05:11.985 "bdev_split_create", 00:05:11.985 "bdev_delay_delete", 00:05:11.985 "bdev_delay_create", 00:05:11.985 "bdev_delay_update_latency", 00:05:11.985 "bdev_zone_block_delete", 00:05:11.985 "bdev_zone_block_create", 00:05:11.985 "blobfs_create", 00:05:11.985 "blobfs_detect", 00:05:11.985 "blobfs_set_cache_size", 00:05:11.985 "bdev_aio_delete", 00:05:11.985 "bdev_aio_rescan", 00:05:11.985 "bdev_aio_create", 00:05:11.985 "bdev_ftl_set_property", 00:05:11.985 "bdev_ftl_get_properties", 00:05:11.985 "bdev_ftl_get_stats", 00:05:11.985 "bdev_ftl_unmap", 00:05:11.985 "bdev_ftl_unload", 00:05:11.985 "bdev_ftl_delete", 00:05:11.985 "bdev_ftl_load", 00:05:11.985 "bdev_ftl_create", 00:05:11.985 "bdev_virtio_attach_controller", 00:05:11.985 "bdev_virtio_scsi_get_devices", 00:05:11.985 "bdev_virtio_detach_controller", 00:05:11.985 "bdev_virtio_blk_set_hotplug", 00:05:11.985 "bdev_iscsi_delete", 00:05:11.985 "bdev_iscsi_create", 00:05:11.985 "bdev_iscsi_set_options", 00:05:11.985 "bdev_uring_delete", 00:05:11.985 "bdev_uring_rescan", 00:05:11.985 "bdev_uring_create", 00:05:11.985 "accel_error_inject_error", 00:05:11.985 "ioat_scan_accel_module", 00:05:11.985 "dsa_scan_accel_module", 00:05:11.985 "iaa_scan_accel_module", 00:05:11.985 "keyring_file_remove_key", 00:05:11.985 "keyring_file_add_key", 00:05:11.985 "keyring_linux_set_options", 00:05:11.985 "fsdev_aio_delete", 00:05:11.985 "fsdev_aio_create", 00:05:11.985 "iscsi_get_histogram", 00:05:11.985 "iscsi_enable_histogram", 00:05:11.985 "iscsi_set_options", 00:05:11.985 "iscsi_get_auth_groups", 00:05:11.985 "iscsi_auth_group_remove_secret", 00:05:11.985 "iscsi_auth_group_add_secret", 00:05:11.985 "iscsi_delete_auth_group", 00:05:11.985 "iscsi_create_auth_group", 00:05:11.985 "iscsi_set_discovery_auth", 00:05:11.985 "iscsi_get_options", 00:05:11.985 "iscsi_target_node_request_logout", 00:05:11.985 "iscsi_target_node_set_redirect", 00:05:11.985 "iscsi_target_node_set_auth", 00:05:11.985 "iscsi_target_node_add_lun", 00:05:11.985 "iscsi_get_stats", 00:05:11.985 "iscsi_get_connections", 00:05:11.985 "iscsi_portal_group_set_auth", 00:05:11.985 "iscsi_start_portal_group", 00:05:11.985 "iscsi_delete_portal_group", 00:05:11.985 "iscsi_create_portal_group", 00:05:11.985 "iscsi_get_portal_groups", 00:05:11.985 "iscsi_delete_target_node", 00:05:11.986 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.986 "iscsi_target_node_add_pg_ig_maps", 00:05:11.986 "iscsi_create_target_node", 00:05:11.986 "iscsi_get_target_nodes", 00:05:11.986 "iscsi_delete_initiator_group", 00:05:11.986 "iscsi_initiator_group_remove_initiators", 00:05:11.986 "iscsi_initiator_group_add_initiators", 00:05:11.986 "iscsi_create_initiator_group", 00:05:11.986 "iscsi_get_initiator_groups", 00:05:11.986 "nvmf_set_crdt", 00:05:11.986 "nvmf_set_config", 00:05:11.986 "nvmf_set_max_subsystems", 00:05:11.986 "nvmf_stop_mdns_prr", 00:05:11.986 "nvmf_publish_mdns_prr", 00:05:11.986 "nvmf_subsystem_get_listeners", 00:05:11.986 "nvmf_subsystem_get_qpairs", 00:05:11.986 
"nvmf_subsystem_get_controllers", 00:05:11.986 "nvmf_get_stats", 00:05:11.986 "nvmf_get_transports", 00:05:11.986 "nvmf_create_transport", 00:05:11.986 "nvmf_get_targets", 00:05:11.986 "nvmf_delete_target", 00:05:11.986 "nvmf_create_target", 00:05:11.986 "nvmf_subsystem_allow_any_host", 00:05:11.986 "nvmf_subsystem_set_keys", 00:05:11.986 "nvmf_subsystem_remove_host", 00:05:11.986 "nvmf_subsystem_add_host", 00:05:11.986 "nvmf_ns_remove_host", 00:05:11.986 "nvmf_ns_add_host", 00:05:11.986 "nvmf_subsystem_remove_ns", 00:05:11.986 "nvmf_subsystem_set_ns_ana_group", 00:05:11.986 "nvmf_subsystem_add_ns", 00:05:11.986 "nvmf_subsystem_listener_set_ana_state", 00:05:11.986 "nvmf_discovery_get_referrals", 00:05:11.986 "nvmf_discovery_remove_referral", 00:05:11.986 "nvmf_discovery_add_referral", 00:05:11.986 "nvmf_subsystem_remove_listener", 00:05:11.986 "nvmf_subsystem_add_listener", 00:05:11.986 "nvmf_delete_subsystem", 00:05:11.986 "nvmf_create_subsystem", 00:05:11.986 "nvmf_get_subsystems", 00:05:11.986 "env_dpdk_get_mem_stats", 00:05:11.986 "nbd_get_disks", 00:05:11.986 "nbd_stop_disk", 00:05:11.986 "nbd_start_disk", 00:05:11.986 "ublk_recover_disk", 00:05:11.986 "ublk_get_disks", 00:05:11.986 "ublk_stop_disk", 00:05:11.986 "ublk_start_disk", 00:05:11.986 "ublk_destroy_target", 00:05:11.986 "ublk_create_target", 00:05:11.986 "virtio_blk_create_transport", 00:05:11.986 "virtio_blk_get_transports", 00:05:11.986 "vhost_controller_set_coalescing", 00:05:11.986 "vhost_get_controllers", 00:05:11.986 "vhost_delete_controller", 00:05:11.986 "vhost_create_blk_controller", 00:05:11.986 "vhost_scsi_controller_remove_target", 00:05:11.986 "vhost_scsi_controller_add_target", 00:05:11.986 "vhost_start_scsi_controller", 00:05:11.986 "vhost_create_scsi_controller", 00:05:11.986 "thread_set_cpumask", 00:05:11.986 "scheduler_set_options", 00:05:11.986 "framework_get_governor", 00:05:11.986 "framework_get_scheduler", 00:05:11.986 "framework_set_scheduler", 00:05:11.986 "framework_get_reactors", 00:05:11.986 "thread_get_io_channels", 00:05:11.986 "thread_get_pollers", 00:05:11.986 "thread_get_stats", 00:05:11.986 "framework_monitor_context_switch", 00:05:11.986 "spdk_kill_instance", 00:05:11.986 "log_enable_timestamps", 00:05:11.986 "log_get_flags", 00:05:11.986 "log_clear_flag", 00:05:11.986 "log_set_flag", 00:05:11.986 "log_get_level", 00:05:11.986 "log_set_level", 00:05:11.986 "log_get_print_level", 00:05:11.986 "log_set_print_level", 00:05:11.986 "framework_enable_cpumask_locks", 00:05:11.986 "framework_disable_cpumask_locks", 00:05:11.986 "framework_wait_init", 00:05:11.986 "framework_start_init", 00:05:11.986 "scsi_get_devices", 00:05:11.986 "bdev_get_histogram", 00:05:11.986 "bdev_enable_histogram", 00:05:11.986 "bdev_set_qos_limit", 00:05:11.986 "bdev_set_qd_sampling_period", 00:05:11.986 "bdev_get_bdevs", 00:05:11.986 "bdev_reset_iostat", 00:05:11.986 "bdev_get_iostat", 00:05:11.986 "bdev_examine", 00:05:11.986 "bdev_wait_for_examine", 00:05:11.986 "bdev_set_options", 00:05:11.986 "accel_get_stats", 00:05:11.986 "accel_set_options", 00:05:11.986 "accel_set_driver", 00:05:11.986 "accel_crypto_key_destroy", 00:05:11.986 "accel_crypto_keys_get", 00:05:11.986 "accel_crypto_key_create", 00:05:11.986 "accel_assign_opc", 00:05:11.986 "accel_get_module_info", 00:05:11.986 "accel_get_opc_assignments", 00:05:11.986 "vmd_rescan", 00:05:11.986 "vmd_remove_device", 00:05:11.986 "vmd_enable", 00:05:11.986 "sock_get_default_impl", 00:05:11.986 "sock_set_default_impl", 00:05:11.986 "sock_impl_set_options", 00:05:11.986 
"sock_impl_get_options", 00:05:11.986 "iobuf_get_stats", 00:05:11.986 "iobuf_set_options", 00:05:11.986 "keyring_get_keys", 00:05:11.986 "framework_get_pci_devices", 00:05:11.986 "framework_get_config", 00:05:11.986 "framework_get_subsystems", 00:05:11.986 "fsdev_set_opts", 00:05:11.986 "fsdev_get_opts", 00:05:11.986 "trace_get_info", 00:05:11.986 "trace_get_tpoint_group_mask", 00:05:11.986 "trace_disable_tpoint_group", 00:05:11.986 "trace_enable_tpoint_group", 00:05:11.986 "trace_clear_tpoint_mask", 00:05:11.986 "trace_set_tpoint_mask", 00:05:11.986 "notify_get_notifications", 00:05:11.986 "notify_get_types", 00:05:11.986 "spdk_get_version", 00:05:11.986 "rpc_get_methods" 00:05:11.986 ] 00:05:11.986 08:16:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.986 08:16:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.986 08:16:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58002 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58002 ']' 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58002 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58002 00:05:11.986 killing process with pid 58002 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58002' 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58002 00:05:11.986 08:16:13 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58002 00:05:12.554 ************************************ 00:05:12.554 END TEST spdkcli_tcp 00:05:12.554 ************************************ 00:05:12.554 00:05:12.554 real 0m1.758s 00:05:12.554 user 0m2.918s 00:05:12.554 sys 0m0.558s 00:05:12.554 08:16:14 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.554 08:16:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.813 08:16:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.813 08:16:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.813 08:16:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.813 08:16:14 -- common/autotest_common.sh@10 -- # set +x 00:05:12.813 ************************************ 00:05:12.813 START TEST dpdk_mem_utility 00:05:12.813 ************************************ 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.813 * Looking for test storage... 
00:05:12.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.813 08:16:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.813 --rc genhtml_branch_coverage=1 00:05:12.813 --rc genhtml_function_coverage=1 00:05:12.813 --rc genhtml_legend=1 00:05:12.813 --rc geninfo_all_blocks=1 00:05:12.813 --rc geninfo_unexecuted_blocks=1 00:05:12.813 00:05:12.813 ' 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.813 --rc genhtml_branch_coverage=1 00:05:12.813 --rc genhtml_function_coverage=1 00:05:12.813 --rc genhtml_legend=1 00:05:12.813 --rc geninfo_all_blocks=1 00:05:12.813 --rc geninfo_unexecuted_blocks=1 00:05:12.813 00:05:12.813 ' 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.813 --rc genhtml_branch_coverage=1 00:05:12.813 --rc genhtml_function_coverage=1 00:05:12.813 --rc genhtml_legend=1 00:05:12.813 --rc geninfo_all_blocks=1 00:05:12.813 --rc geninfo_unexecuted_blocks=1 00:05:12.813 00:05:12.813 ' 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.813 --rc genhtml_branch_coverage=1 00:05:12.813 --rc genhtml_function_coverage=1 00:05:12.813 --rc genhtml_legend=1 00:05:12.813 --rc geninfo_all_blocks=1 00:05:12.813 --rc geninfo_unexecuted_blocks=1 00:05:12.813 00:05:12.813 ' 00:05:12.813 08:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.813 08:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58099 00:05:12.813 08:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58099 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58099 ']' 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.813 08:16:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.813 08:16:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.073 [2024-10-15 08:16:14.567506] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:05:13.073 [2024-10-15 08:16:14.568482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58099 ] 00:05:13.073 [2024-10-15 08:16:14.707583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.073 [2024-10-15 08:16:14.785818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.331 [2024-10-15 08:16:14.884729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:13.592 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.592 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:13.592 08:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.592 08:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.592 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.592 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.592 { 00:05:13.592 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.592 } 00:05:13.592 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.592 08:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:13.592 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:13.592 1 heaps totaling size 810.000000 MiB 00:05:13.592 size: 810.000000 MiB heap id: 0 00:05:13.592 end heaps---------- 00:05:13.592 9 mempools totaling size 595.772034 MiB 00:05:13.592 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.592 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.592 size: 92.545471 MiB name: bdev_io_58099 00:05:13.592 size: 50.003479 MiB name: msgpool_58099 00:05:13.592 size: 36.509338 MiB name: fsdev_io_58099 00:05:13.592 size: 21.763794 MiB name: PDU_Pool 00:05:13.592 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.592 size: 4.133484 MiB name: evtpool_58099 00:05:13.592 size: 0.026123 MiB name: Session_Pool 00:05:13.592 end mempools------- 00:05:13.592 6 memzones totaling size 4.142822 MiB 00:05:13.592 size: 1.000366 MiB name: RG_ring_0_58099 00:05:13.592 size: 1.000366 MiB name: RG_ring_1_58099 00:05:13.592 size: 1.000366 MiB name: RG_ring_4_58099 00:05:13.592 size: 1.000366 MiB name: RG_ring_5_58099 00:05:13.592 size: 0.125366 MiB name: RG_ring_2_58099 00:05:13.592 size: 0.015991 MiB name: RG_ring_3_58099 00:05:13.592 end memzones------- 00:05:13.592 08:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.592 heap id: 0 total size: 810.000000 MiB number of busy elements: 310 number of free elements: 15 00:05:13.592 list of free elements. 
size: 10.813782 MiB 00:05:13.592 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:13.592 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:13.592 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:13.592 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:13.592 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:13.592 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:13.592 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:13.592 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:13.592 element at address: 0x20001a600000 with size: 0.567688 MiB 00:05:13.592 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:13.592 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:13.592 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:13.592 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:13.592 element at address: 0x200027a00000 with size: 0.396301 MiB 00:05:13.592 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:13.592 list of standard malloc elements. size: 199.267334 MiB 00:05:13.593 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:13.593 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:13.593 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:13.593 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:13.593 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:13.593 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:13.593 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:13.593 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:13.593 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:13.593 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:13.593 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:13.593 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691540 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:13.593 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a692f80 with size: 0.000183 MiB 
00:05:13.594 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:13.594 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:13.594 element at 
address: 0x200027a65740 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a65800 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6c400 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e880 
with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:13.594 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:13.594 list of memzone associated elements. 
size: 599.918884 MiB 00:05:13.594 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:13.594 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.594 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:13.594 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.594 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:13.594 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58099_0 00:05:13.594 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:13.594 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58099_0 00:05:13.594 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:13.594 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58099_0 00:05:13.594 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:13.594 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.595 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:13.595 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.595 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:13.595 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58099_0 00:05:13.595 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:13.595 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58099 00:05:13.595 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:13.595 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58099 00:05:13.595 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:13.595 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.595 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:13.595 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.595 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:13.595 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.595 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:13.595 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.595 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:13.595 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58099 00:05:13.595 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:13.595 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58099 00:05:13.595 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:13.595 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58099 00:05:13.595 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:13.595 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58099 00:05:13.595 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:13.595 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58099 00:05:13.595 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:13.595 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58099 00:05:13.595 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:13.595 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.595 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:13.595 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.595 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:13.595 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.595 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:13.595 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58099 00:05:13.595 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:13.595 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58099 00:05:13.595 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:13.595 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.595 element at address: 0x200027a658c0 with size: 0.023743 MiB 00:05:13.595 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.595 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:13.595 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58099 00:05:13.595 element at address: 0x200027a6ba00 with size: 0.002441 MiB 00:05:13.595 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.595 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:13.595 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58099 00:05:13.595 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:13.595 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58099 00:05:13.595 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:13.595 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58099 00:05:13.595 element at address: 0x200027a6c4c0 with size: 0.000305 MiB 00:05:13.595 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.595 08:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.595 08:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58099 00:05:13.595 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58099 ']' 00:05:13.595 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58099 00:05:13.595 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:13.595 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.595 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58099 00:05:13.856 killing process with pid 58099 00:05:13.856 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.856 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.856 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58099' 00:05:13.856 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58099 00:05:13.856 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58099 00:05:14.428 00:05:14.428 real 0m1.599s 00:05:14.428 user 0m1.524s 00:05:14.428 sys 0m0.536s 00:05:14.428 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.428 ************************************ 00:05:14.428 END TEST dpdk_mem_utility 00:05:14.428 ************************************ 00:05:14.428 08:16:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.428 08:16:15 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:14.428 08:16:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.428 08:16:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.428 08:16:15 -- common/autotest_common.sh@10 -- # set +x 
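The dpdk_mem_utility test wrapped up above follows a short recipe: launch spdk_tgt, ask it to dump DPDK memory statistics via the env_dpdk_get_mem_stats RPC (which reports writing /tmp/spdk_mem_dump.txt), then post-process that dump with scripts/dpdk_mem_info.py, first for the heap/mempool/memzone summary and then with -m 0 for the per-element view of heap 0. A rough by-hand equivalent of the same sequence (sketch only: the real test uses the waitforlisten and killprocess helpers from autotest_common.sh, while sleep/kill and the tgt_pid variable below are crude stand-ins):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk               # checkout path used in this run
    "$SPDK_DIR/build/bin/spdk_tgt" &                    # start the target application
    tgt_pid=$!
    sleep 2                                             # stand-in for waitforlisten
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    "$SPDK_DIR/scripts/dpdk_mem_info.py"                # heap/mempool/memzone summary
    "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0           # per-element dump for heap id 0
    kill "$tgt_pid"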
00:05:14.428 ************************************ 00:05:14.428 START TEST event 00:05:14.428 ************************************ 00:05:14.428 08:16:15 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:14.428 * Looking for test storage... 00:05:14.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:14.428 08:16:16 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:14.428 08:16:16 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:14.428 08:16:16 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:14.428 08:16:16 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:14.428 08:16:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.428 08:16:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.428 08:16:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.428 08:16:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.428 08:16:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.428 08:16:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.428 08:16:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.428 08:16:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.428 08:16:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.428 08:16:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.428 08:16:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.428 08:16:16 event -- scripts/common.sh@344 -- # case "$op" in 00:05:14.428 08:16:16 event -- scripts/common.sh@345 -- # : 1 00:05:14.428 08:16:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.428 08:16:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.428 08:16:16 event -- scripts/common.sh@365 -- # decimal 1 00:05:14.428 08:16:16 event -- scripts/common.sh@353 -- # local d=1 00:05:14.428 08:16:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.428 08:16:16 event -- scripts/common.sh@355 -- # echo 1 00:05:14.428 08:16:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.428 08:16:16 event -- scripts/common.sh@366 -- # decimal 2 00:05:14.686 08:16:16 event -- scripts/common.sh@353 -- # local d=2 00:05:14.686 08:16:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.686 08:16:16 event -- scripts/common.sh@355 -- # echo 2 00:05:14.686 08:16:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.686 08:16:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.686 08:16:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.686 08:16:16 event -- scripts/common.sh@368 -- # return 0 00:05:14.686 08:16:16 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.686 08:16:16 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.686 --rc genhtml_branch_coverage=1 00:05:14.686 --rc genhtml_function_coverage=1 00:05:14.686 --rc genhtml_legend=1 00:05:14.686 --rc geninfo_all_blocks=1 00:05:14.686 --rc geninfo_unexecuted_blocks=1 00:05:14.686 00:05:14.686 ' 00:05:14.686 08:16:16 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.686 --rc genhtml_branch_coverage=1 00:05:14.686 --rc genhtml_function_coverage=1 00:05:14.686 --rc genhtml_legend=1 00:05:14.686 --rc 
geninfo_all_blocks=1 00:05:14.686 --rc geninfo_unexecuted_blocks=1 00:05:14.686 00:05:14.686 ' 00:05:14.686 08:16:16 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.686 --rc genhtml_branch_coverage=1 00:05:14.686 --rc genhtml_function_coverage=1 00:05:14.686 --rc genhtml_legend=1 00:05:14.686 --rc geninfo_all_blocks=1 00:05:14.686 --rc geninfo_unexecuted_blocks=1 00:05:14.686 00:05:14.686 ' 00:05:14.686 08:16:16 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:14.686 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.686 --rc genhtml_branch_coverage=1 00:05:14.686 --rc genhtml_function_coverage=1 00:05:14.686 --rc genhtml_legend=1 00:05:14.686 --rc geninfo_all_blocks=1 00:05:14.686 --rc geninfo_unexecuted_blocks=1 00:05:14.686 00:05:14.686 ' 00:05:14.687 08:16:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:14.687 08:16:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:14.687 08:16:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.687 08:16:16 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:14.687 08:16:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.687 08:16:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.687 ************************************ 00:05:14.687 START TEST event_perf 00:05:14.687 ************************************ 00:05:14.687 08:16:16 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:14.687 Running I/O for 1 seconds...[2024-10-15 08:16:16.197262] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:14.687 [2024-10-15 08:16:16.197732] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58171 ] 00:05:14.687 [2024-10-15 08:16:16.338929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.945 [2024-10-15 08:16:16.424244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.945 [2024-10-15 08:16:16.424460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.945 [2024-10-15 08:16:16.424618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.945 [2024-10-15 08:16:16.424618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.883 Running I/O for 1 seconds... 00:05:15.883 lcore 0: 102738 00:05:15.883 lcore 1: 102738 00:05:15.883 lcore 2: 102739 00:05:15.883 lcore 3: 102735 00:05:15.883 done. 
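event_perf above brings up one reactor per bit in the -m core mask (0xF, so four reactors) and pushes events through the framework for -t seconds, then reports how many events each lcore processed. A quick way to re-run it by hand and total those per-lcore counts (sketch; the binary path and flags are the ones shown in the trace, the awk post-processing is added here):

    # -m is the reactor core mask, -t the run time in seconds.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 |
        awk '/^lcore/ {total += $3} END {print "total events:", total}'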
00:05:15.883 00:05:15.883 real 0m1.325s 00:05:15.883 user 0m4.128s 00:05:15.883 sys 0m0.069s 00:05:15.883 ************************************ 00:05:15.883 END TEST event_perf 00:05:15.883 ************************************ 00:05:15.883 08:16:17 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.883 08:16:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.884 08:16:17 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:15.884 08:16:17 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:15.884 08:16:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.884 08:16:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.884 ************************************ 00:05:15.884 START TEST event_reactor 00:05:15.884 ************************************ 00:05:15.884 08:16:17 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:15.884 [2024-10-15 08:16:17.572078] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:15.884 [2024-10-15 08:16:17.572329] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58215 ] 00:05:16.143 [2024-10-15 08:16:17.710471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.143 [2024-10-15 08:16:17.800222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.530 test_start 00:05:17.530 oneshot 00:05:17.530 tick 100 00:05:17.530 tick 100 00:05:17.530 tick 250 00:05:17.530 tick 100 00:05:17.530 tick 100 00:05:17.530 tick 100 00:05:17.530 tick 250 00:05:17.530 tick 500 00:05:17.530 tick 100 00:05:17.530 tick 100 00:05:17.530 tick 250 00:05:17.530 tick 100 00:05:17.530 tick 100 00:05:17.530 test_end 00:05:17.530 00:05:17.530 real 0m1.320s 00:05:17.530 user 0m1.153s 00:05:17.530 sys 0m0.059s 00:05:17.530 08:16:18 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.530 ************************************ 00:05:17.530 END TEST event_reactor 00:05:17.530 ************************************ 00:05:17.530 08:16:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:17.530 08:16:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.530 08:16:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:17.530 08:16:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.530 08:16:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.530 ************************************ 00:05:17.530 START TEST event_reactor_perf 00:05:17.530 ************************************ 00:05:17.530 08:16:18 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:17.530 [2024-10-15 08:16:18.954926] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
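The asterisk banners, START TEST/END TEST markers, and real/user/sys lines that bracket each test, like those just above for event_perf, come from the run_test helper in autotest_common.sh, which names the test, times the command, and propagates its exit code. A simplified sketch of that pattern (the name run_test_sketch is invented here; the real helper also toggles xtrace via xtrace_disable, as seen throughout the trace):

    run_test_sketch() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
    run_test_sketch event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1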
00:05:17.530 [2024-10-15 08:16:18.955072] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58245 ] 00:05:17.530 [2024-10-15 08:16:19.095569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.530 [2024-10-15 08:16:19.177063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.918 test_start 00:05:18.918 test_end 00:05:18.918 Performance: 374429 events per second 00:05:18.918 00:05:18.918 real 0m1.318s 00:05:18.918 user 0m1.160s 00:05:18.918 sys 0m0.050s 00:05:18.918 08:16:20 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.918 08:16:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:18.918 ************************************ 00:05:18.918 END TEST event_reactor_perf 00:05:18.918 ************************************ 00:05:18.918 08:16:20 event -- event/event.sh@49 -- # uname -s 00:05:18.918 08:16:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:18.918 08:16:20 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.918 08:16:20 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.918 08:16:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.918 08:16:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.919 ************************************ 00:05:18.919 START TEST event_scheduler 00:05:18.919 ************************************ 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:18.919 * Looking for test storage... 
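The two single-core tests above exercise the reactor directly: test/event/reactor/reactor schedules a one-shot event plus repeating ticks on core 0 (the oneshot/tick lines), and test/event/reactor_perf/reactor_perf measures raw event throughput, printing an events-per-second figure. Re-running the throughput test by hand over a longer window and averaging smooths that number out (sketch; the BIN variable, the 10-second runs, and the awk averaging are illustrative additions, while -t is the run-time flag used above):

    BIN=/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf
    for i in 1 2 3; do
        "$BIN" -t 10                   # three 10-second runs instead of one 1-second run
    done | awk '/Performance:/ {sum += $2; n++} END {print "mean:", sum / n, "events per second"}'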
00:05:18.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.919 08:16:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.919 --rc genhtml_branch_coverage=1 00:05:18.919 --rc genhtml_function_coverage=1 00:05:18.919 --rc genhtml_legend=1 00:05:18.919 --rc geninfo_all_blocks=1 00:05:18.919 --rc geninfo_unexecuted_blocks=1 00:05:18.919 00:05:18.919 ' 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.919 --rc genhtml_branch_coverage=1 00:05:18.919 --rc genhtml_function_coverage=1 00:05:18.919 --rc genhtml_legend=1 00:05:18.919 --rc geninfo_all_blocks=1 00:05:18.919 --rc geninfo_unexecuted_blocks=1 00:05:18.919 00:05:18.919 ' 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.919 --rc genhtml_branch_coverage=1 00:05:18.919 --rc genhtml_function_coverage=1 00:05:18.919 --rc genhtml_legend=1 00:05:18.919 --rc geninfo_all_blocks=1 00:05:18.919 --rc geninfo_unexecuted_blocks=1 00:05:18.919 00:05:18.919 ' 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.919 --rc genhtml_branch_coverage=1 00:05:18.919 --rc genhtml_function_coverage=1 00:05:18.919 --rc genhtml_legend=1 00:05:18.919 --rc geninfo_all_blocks=1 00:05:18.919 --rc geninfo_unexecuted_blocks=1 00:05:18.919 00:05:18.919 ' 00:05:18.919 08:16:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:18.919 08:16:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58320 00:05:18.919 08:16:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.919 08:16:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:18.919 08:16:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58320 00:05:18.919 08:16:20 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58320 ']' 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.919 08:16:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.919 [2024-10-15 08:16:20.573880] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:18.919 [2024-10-15 08:16:20.574512] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58320 ] 00:05:19.179 [2024-10-15 08:16:20.717330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.179 [2024-10-15 08:16:20.811649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.179 [2024-10-15 08:16:20.811755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.179 [2024-10-15 08:16:20.811837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.179 [2024-10-15 08:16:20.811838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.179 08:16:20 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.179 08:16:20 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:19.179 08:16:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:19.179 08:16:20 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.179 08:16:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.179 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.179 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.179 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.179 POWER: Cannot set governor of lcore 0 to performance 00:05:19.179 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.179 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.179 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:19.179 POWER: Cannot set governor of lcore 0 to userspace 00:05:19.179 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:19.179 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:19.179 POWER: Unable to set Power Management Environment for lcore 0 00:05:19.179 [2024-10-15 08:16:20.879071] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:19.179 [2024-10-15 08:16:20.879223] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:19.179 [2024-10-15 08:16:20.879362] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:19.179 [2024-10-15 08:16:20.879508] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:19.179 [2024-10-15 08:16:20.879626] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:19.179 [2024-10-15 08:16:20.879751] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:19.179 08:16:20 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.179 08:16:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:19.179 08:16:20 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.179 08:16:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.438 [2024-10-15 08:16:20.963903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:19.439 [2024-10-15 08:16:21.015756] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:19.439 08:16:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:19.439 08:16:21 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.439 08:16:21 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 ************************************ 00:05:19.439 START TEST scheduler_create_thread 00:05:19.439 ************************************ 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 2 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 3 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 4 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 5 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 6 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 7 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 8 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 9 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 10 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.439 08:16:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.376 08:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.376 08:16:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:20.376 08:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.376 08:16:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.754 08:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.754 08:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:21.754 08:16:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:21.754 08:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.754 08:16:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.730 ************************************ 00:05:22.730 END TEST scheduler_create_thread 00:05:22.730 ************************************ 00:05:22.730 08:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.730 00:05:22.730 real 0m3.377s 00:05:22.730 user 0m0.018s 00:05:22.730 sys 0m0.008s 00:05:22.730 08:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.730 08:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.730 08:16:24 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:22.730 08:16:24 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58320 00:05:22.730 08:16:24 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58320 ']' 00:05:22.730 08:16:24 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58320 00:05:22.730 08:16:24 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:22.730 08:16:24 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.730 08:16:24 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58320 00:05:22.989 killing process with pid 58320 00:05:22.989 08:16:24 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:22.989 08:16:24 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:22.989 08:16:24 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
58320' 00:05:22.989 08:16:24 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58320 00:05:22.989 08:16:24 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58320 00:05:23.248 [2024-10-15 08:16:24.786067] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:23.507 00:05:23.507 real 0m4.796s 00:05:23.507 user 0m8.307s 00:05:23.507 sys 0m0.414s 00:05:23.507 08:16:25 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.507 08:16:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.507 ************************************ 00:05:23.507 END TEST event_scheduler 00:05:23.507 ************************************ 00:05:23.507 08:16:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:23.507 08:16:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:23.507 08:16:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.507 08:16:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.507 08:16:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.507 ************************************ 00:05:23.507 START TEST app_repeat 00:05:23.507 ************************************ 00:05:23.507 08:16:25 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58412 00:05:23.507 Process app_repeat pid: 58412 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58412' 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.507 spdk_app_start Round 0 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:23.507 08:16:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:23.507 08:16:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58412 ']' 00:05:23.507 08:16:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.507 08:16:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.507 08:16:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
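For reference, the scheduler exercise traced above is driven entirely through scripts/rpc.py against the test app's socket. A minimal sketch of the same RPC sequence, run from the SPDK repo root with the scheduler test app already listening on /var/tmp/spdk.sock and the test's scheduler_plugin module on PYTHONPATH (that PYTHONPATH export is an assumption of this sketch, not shown in the trace):

  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  # switch to the dynamic scheduler, then finish subsystem init
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init
  # one busy thread pinned to core 0 (mask 0x1), one idle thread pinned to core 1 (mask 0x2)
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
  # the create call prints the new thread id (11 and 12 in the run above);
  # capture it, change the thread's reported activity, then delete it
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"

The -m mask pins a thread to specific cores and -a sets how busy the thread reports itself, which is what drives the dynamic scheduler's placement decisions in the trace above.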
00:05:23.507 08:16:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.507 08:16:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.507 [2024-10-15 08:16:25.191333] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:23.507 [2024-10-15 08:16:25.191482] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58412 ] 00:05:23.765 [2024-10-15 08:16:25.329896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.765 [2024-10-15 08:16:25.413690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.765 [2024-10-15 08:16:25.413701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.765 [2024-10-15 08:16:25.488096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.024 08:16:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.024 08:16:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:24.024 08:16:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.282 Malloc0 00:05:24.282 08:16:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.540 Malloc1 00:05:24.541 08:16:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.541 08:16:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.799 /dev/nbd0 00:05:25.057 08:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.057 08:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.057 1+0 records in 00:05:25.057 1+0 records out 00:05:25.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489339 s, 8.4 MB/s 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.057 08:16:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.057 08:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.057 08:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.057 08:16:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.316 /dev/nbd1 00:05:25.316 08:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.316 08:16:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.316 1+0 records in 00:05:25.316 1+0 records out 00:05:25.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296948 s, 13.8 MB/s 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:25.316 08:16:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.316 08:16:26 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:05:25.316 08:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.316 08:16:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.316 08:16:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.316 08:16:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.316 08:16:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.576 { 00:05:25.576 "nbd_device": "/dev/nbd0", 00:05:25.576 "bdev_name": "Malloc0" 00:05:25.576 }, 00:05:25.576 { 00:05:25.576 "nbd_device": "/dev/nbd1", 00:05:25.576 "bdev_name": "Malloc1" 00:05:25.576 } 00:05:25.576 ]' 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.576 { 00:05:25.576 "nbd_device": "/dev/nbd0", 00:05:25.576 "bdev_name": "Malloc0" 00:05:25.576 }, 00:05:25.576 { 00:05:25.576 "nbd_device": "/dev/nbd1", 00:05:25.576 "bdev_name": "Malloc1" 00:05:25.576 } 00:05:25.576 ]' 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.576 /dev/nbd1' 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.576 /dev/nbd1' 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.576 256+0 records in 00:05:25.576 256+0 records out 00:05:25.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0085575 s, 123 MB/s 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.576 256+0 records in 00:05:25.576 256+0 records out 00:05:25.576 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244554 s, 42.9 MB/s 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.576 08:16:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.835 256+0 records in 00:05:25.835 
256+0 records out 00:05:25.835 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265619 s, 39.5 MB/s 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.835 08:16:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.094 08:16:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.352 08:16:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.611 08:16:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.611 08:16:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.179 08:16:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:27.179 [2024-10-15 08:16:28.903877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.438 [2024-10-15 08:16:28.981551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.438 [2024-10-15 08:16:28.981565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.438 [2024-10-15 08:16:29.057381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.438 [2024-10-15 08:16:29.057484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:27.438 [2024-10-15 08:16:29.057498] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:29.971 spdk_app_start Round 1 00:05:29.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.971 08:16:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:29.971 08:16:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:29.971 08:16:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:29.971 08:16:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58412 ']' 00:05:29.971 08:16:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.971 08:16:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:29.971 08:16:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
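The dd/cmp sequence that repeats in every round above is the whole data-verify step. A condensed sketch of the same commands, using the paths from this run, against the two exported devices:

  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  # generate 1 MiB of random data, then write it through each nbd device with O_DIRECT
  dd if=/dev/urandom of=$tmp bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct
  done
  # read it back from the devices and compare byte-for-byte against the source file
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $tmp $nbd
  done
  rm $tmp

Any mismatch makes cmp exit non-zero, which fails the round; the throughput figures printed by dd in the trace are incidental.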
00:05:29.971 08:16:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:29.971 08:16:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.537 08:16:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.537 08:16:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:30.537 08:16:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.794 Malloc0 00:05:30.794 08:16:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.053 Malloc1 00:05:31.053 08:16:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.053 08:16:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.619 /dev/nbd0 00:05:31.619 08:16:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.619 08:16:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.619 1+0 records in 00:05:31.619 1+0 records out 
00:05:31.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306497 s, 13.4 MB/s 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.619 08:16:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:31.619 08:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.619 08:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.619 08:16:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.877 /dev/nbd1 00:05:31.877 08:16:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.877 08:16:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:31.877 08:16:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.877 1+0 records in 00:05:31.878 1+0 records out 00:05:31.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203092 s, 20.2 MB/s 00:05:31.878 08:16:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.878 08:16:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:31.878 08:16:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:31.878 08:16:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.878 08:16:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:31.878 08:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.878 08:16:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.878 08:16:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.878 08:16:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.878 08:16:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.136 { 00:05:32.136 "nbd_device": "/dev/nbd0", 00:05:32.136 "bdev_name": "Malloc0" 00:05:32.136 }, 00:05:32.136 { 00:05:32.136 "nbd_device": "/dev/nbd1", 00:05:32.136 "bdev_name": "Malloc1" 00:05:32.136 } 
00:05:32.136 ]' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.136 { 00:05:32.136 "nbd_device": "/dev/nbd0", 00:05:32.136 "bdev_name": "Malloc0" 00:05:32.136 }, 00:05:32.136 { 00:05:32.136 "nbd_device": "/dev/nbd1", 00:05:32.136 "bdev_name": "Malloc1" 00:05:32.136 } 00:05:32.136 ]' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.136 /dev/nbd1' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.136 /dev/nbd1' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.136 256+0 records in 00:05:32.136 256+0 records out 00:05:32.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00838295 s, 125 MB/s 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.136 256+0 records in 00:05:32.136 256+0 records out 00:05:32.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223137 s, 47.0 MB/s 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.136 256+0 records in 00:05:32.136 256+0 records out 00:05:32.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254532 s, 41.2 MB/s 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.136 08:16:33 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.136 08:16:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.704 08:16:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.962 08:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.221 08:16:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.221 08:16:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.479 08:16:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.738 [2024-10-15 08:16:35.382077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.738 [2024-10-15 08:16:35.459690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.738 [2024-10-15 08:16:35.459699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.996 [2024-10-15 08:16:35.538040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.996 [2024-10-15 08:16:35.538201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.996 [2024-10-15 08:16:35.538217] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.525 spdk_app_start Round 2 00:05:36.525 08:16:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.525 08:16:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:36.525 08:16:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:36.525 08:16:38 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58412 ']' 00:05:36.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.525 08:16:38 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.525 08:16:38 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.525 08:16:38 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
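Each round does the same device setup before the I/O: two malloc bdevs are created over the app_repeat RPC socket and exported through the kernel nbd driver. A minimal sketch of that setup and teardown, with the socket path and RPC names exactly as invoked above (modprobe nbd is assumed to have run already, as the test does at startup):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # two 64 MiB malloc bdevs with a 4096-byte block size; default names are Malloc0/Malloc1
  $rpc bdev_malloc_create 64 4096
  $rpc bdev_malloc_create 64 4096
  # export each bdev as a kernel block device
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  # list what is attached (used above to check the device count), then detach both
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1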
00:05:36.525 08:16:38 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.525 08:16:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.783 08:16:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.783 08:16:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:36.783 08:16:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.348 Malloc0 00:05:37.348 08:16:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.606 Malloc1 00:05:37.606 08:16:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.606 08:16:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.865 /dev/nbd0 00:05:37.865 08:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.865 08:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.865 1+0 records in 00:05:37.865 1+0 records out 
00:05:37.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347786 s, 11.8 MB/s 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.865 08:16:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.865 08:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.865 08:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.865 08:16:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.124 /dev/nbd1 00:05:38.124 08:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.124 08:16:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.124 1+0 records in 00:05:38.124 1+0 records out 00:05:38.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247383 s, 16.6 MB/s 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.124 08:16:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.124 08:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.124 08:16:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.124 08:16:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.124 08:16:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.124 08:16:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.692 08:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.692 { 00:05:38.692 "nbd_device": "/dev/nbd0", 00:05:38.692 "bdev_name": "Malloc0" 00:05:38.692 }, 00:05:38.692 { 00:05:38.692 "nbd_device": "/dev/nbd1", 00:05:38.692 "bdev_name": "Malloc1" 00:05:38.692 } 
00:05:38.692 ]' 00:05:38.692 08:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.692 { 00:05:38.692 "nbd_device": "/dev/nbd0", 00:05:38.692 "bdev_name": "Malloc0" 00:05:38.692 }, 00:05:38.692 { 00:05:38.692 "nbd_device": "/dev/nbd1", 00:05:38.692 "bdev_name": "Malloc1" 00:05:38.692 } 00:05:38.692 ]' 00:05:38.692 08:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.692 08:16:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.692 /dev/nbd1' 00:05:38.692 08:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.692 /dev/nbd1' 00:05:38.692 08:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.692 08:16:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.693 256+0 records in 00:05:38.693 256+0 records out 00:05:38.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105281 s, 99.6 MB/s 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.693 256+0 records in 00:05:38.693 256+0 records out 00:05:38.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02302 s, 45.6 MB/s 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.693 256+0 records in 00:05:38.693 256+0 records out 00:05:38.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250385 s, 41.9 MB/s 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.693 08:16:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.951 08:16:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.208 08:16:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.774 08:16:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.774 08:16:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.032 08:16:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.290 [2024-10-15 08:16:41.846364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.290 [2024-10-15 08:16:41.917269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.290 [2024-10-15 08:16:41.917274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.290 [2024-10-15 08:16:41.991501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.290 [2024-10-15 08:16:41.991622] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.290 [2024-10-15 08:16:41.991637] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.573 08:16:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:43.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58412 ']' 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
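The nbd_dd_data_verify and nbd_stop_disks passes traced above reduce to the following shell steps. This is a paraphrase of the bdev/nbd_common.sh commands visible in this log, not the helpers' verbatim source; the poll interval inside the waitfornbd_exit loop is not shown in the trace and is left out here.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256                 # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct      # write pass
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                                 # verify pass
    done
    rm "$tmp"
    for name in nbd0 nbd1; do
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "/dev/$name"
        for ((i = 1; i <= 20; i++)); do                            # waitfornbd_exit
            grep -q -w "$name" /proc/partitions || break
        done
    done
    "$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
    # prints 0 once both devices are gone, matching the '[]' / count=0 checks above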
00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:43.573 08:16:44 event.app_repeat -- event/event.sh@39 -- # killprocess 58412 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58412 ']' 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58412 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58412 00:05:43.573 killing process with pid 58412 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58412' 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58412 00:05:43.573 08:16:44 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58412 00:05:43.573 spdk_app_start is called in Round 0. 00:05:43.573 Shutdown signal received, stop current app iteration 00:05:43.573 Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 reinitialization... 00:05:43.573 spdk_app_start is called in Round 1. 00:05:43.573 Shutdown signal received, stop current app iteration 00:05:43.573 Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 reinitialization... 00:05:43.573 spdk_app_start is called in Round 2. 00:05:43.573 Shutdown signal received, stop current app iteration 00:05:43.573 Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 reinitialization... 00:05:43.574 spdk_app_start is called in Round 3. 00:05:43.574 Shutdown signal received, stop current app iteration 00:05:43.574 08:16:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:43.574 08:16:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:43.574 00:05:43.574 real 0m20.056s 00:05:43.574 user 0m45.694s 00:05:43.574 sys 0m3.282s 00:05:43.574 08:16:45 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.574 08:16:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.574 ************************************ 00:05:43.574 END TEST app_repeat 00:05:43.574 ************************************ 00:05:43.574 08:16:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:43.574 08:16:45 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.574 08:16:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.574 08:16:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.574 08:16:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.574 ************************************ 00:05:43.574 START TEST cpu_locks 00:05:43.574 ************************************ 00:05:43.574 08:16:45 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:43.832 * Looking for test storage... 
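The killprocess 58412 sequence above (uname, ps --no-headers -o comm=, kill, wait) is the standard autotest_common.sh teardown. A simplified sketch of what the trace shows, with the sudo branch left as a stub because it is not exercised in this run:

    killprocess() {
        local pid=$1 process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        fi
        if [ "$process_name" = sudo ]; then
            :   # would have to signal the child of sudo instead; not hit in this log
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true    # tolerate the SIGTERM exit status
    }

The 'spdk_app_start is called in Round N' lines are the application's own progress messages for the four restart rounds that app_repeat drives; after Round 3 the test tears down and reports its real/user/sys timings.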
00:05:43.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.832 08:16:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.832 --rc genhtml_branch_coverage=1 00:05:43.832 --rc genhtml_function_coverage=1 00:05:43.832 --rc genhtml_legend=1 00:05:43.832 --rc geninfo_all_blocks=1 00:05:43.832 --rc geninfo_unexecuted_blocks=1 00:05:43.832 00:05:43.832 ' 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.832 --rc genhtml_branch_coverage=1 00:05:43.832 --rc genhtml_function_coverage=1 
00:05:43.832 --rc genhtml_legend=1 00:05:43.832 --rc geninfo_all_blocks=1 00:05:43.832 --rc geninfo_unexecuted_blocks=1 00:05:43.832 00:05:43.832 ' 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.832 --rc genhtml_branch_coverage=1 00:05:43.832 --rc genhtml_function_coverage=1 00:05:43.832 --rc genhtml_legend=1 00:05:43.832 --rc geninfo_all_blocks=1 00:05:43.832 --rc geninfo_unexecuted_blocks=1 00:05:43.832 00:05:43.832 ' 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.832 --rc genhtml_branch_coverage=1 00:05:43.832 --rc genhtml_function_coverage=1 00:05:43.832 --rc genhtml_legend=1 00:05:43.832 --rc geninfo_all_blocks=1 00:05:43.832 --rc geninfo_unexecuted_blocks=1 00:05:43.832 00:05:43.832 ' 00:05:43.832 08:16:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:43.832 08:16:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:43.832 08:16:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:43.832 08:16:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.832 08:16:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.832 ************************************ 00:05:43.832 START TEST default_locks 00:05:43.832 ************************************ 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58867 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58867 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58867 ']' 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.832 08:16:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.832 [2024-10-15 08:16:45.536980] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
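From this point on the log is the cpu_locks suite. Its driver defines two RPC socket paths so that two spdk_tgt instances can coexist, installs a cleanup trap, and runs one sub-test per locking scenario. Paraphrased from the cpu_locks.sh lines traced in this log (the later run_test calls show up further down as each scenario starts):

    rpc_sock1=/var/tmp/spdk.sock      # first target
    rpc_sock2=/var/tmp/spdk2.sock     # second target, where a scenario needs one
    trap cleanup EXIT SIGTERM SIGINT
    run_test default_locks default_locks
    run_test default_locks_via_rpc default_locks_via_rpc
    run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
    run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
    run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
    run_test locking_overlapped_coremask locking_overlapped_coremask
    run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc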
00:05:43.832 [2024-10-15 08:16:45.537404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58867 ] 00:05:44.090 [2024-10-15 08:16:45.674385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.090 [2024-10-15 08:16:45.755372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.349 [2024-10-15 08:16:45.852344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.625 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.625 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:44.625 08:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58867 00:05:44.625 08:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58867 00:05:44.625 08:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58867 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58867 ']' 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58867 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58867 00:05:44.882 killing process with pid 58867 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58867' 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58867 00:05:44.882 08:16:46 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58867 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58867 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58867 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58867 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58867 ']' 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.449 
08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.449 ERROR: process (pid: 58867) is no longer running 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.449 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58867) - No such process 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.449 00:05:45.449 real 0m1.624s 00:05:45.449 user 0m1.513s 00:05:45.449 sys 0m0.639s 00:05:45.449 ************************************ 00:05:45.449 END TEST default_locks 00:05:45.449 ************************************ 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.449 08:16:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.449 08:16:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:45.449 08:16:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.449 08:16:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.449 08:16:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.449 ************************************ 00:05:45.449 START TEST default_locks_via_rpc 00:05:45.449 ************************************ 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58912 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58912 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58912 ']' 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
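The default_locks checks that just completed rest on two small helpers from event/cpu_locks.sh, sketched below. locks_exist is exactly what the trace shows; for no_locks the glob is an assumption (the trace only shows the empty array and the count test, which implies nullglob is in effect):

    locks_exist() {                                  # used above as: locks_exist 58867
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock*)   # assumed glob; expands to nothing here
        (( ${#lock_files[@]} == 0 ))
    }

Note that the 'ERROR: process (pid: 58867) is no longer running' and 'No such process' lines are expected: the test kills the target and then asserts, via the NOT wrapper, that waitforlisten on the dead pid fails.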
00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.449 08:16:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.707 [2024-10-15 08:16:47.221015] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:45.707 [2024-10-15 08:16:47.221513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ] 00:05:45.707 [2024-10-15 08:16:47.361777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.965 [2024-10-15 08:16:47.437701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.965 [2024-10-15 08:16:47.533218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58912 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58912 00:05:46.532 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58912 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58912 ']' 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58912 00:05:47.097 08:16:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58912 00:05:47.097 killing process with pid 58912 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58912' 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58912 00:05:47.097 08:16:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58912 00:05:47.662 00:05:47.662 real 0m2.026s 00:05:47.662 user 0m2.112s 00:05:47.662 sys 0m0.655s 00:05:47.662 08:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.662 ************************************ 00:05:47.662 END TEST default_locks_via_rpc 00:05:47.662 08:16:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.662 ************************************ 00:05:47.662 08:16:49 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:47.662 08:16:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.662 08:16:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.662 08:16:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.662 ************************************ 00:05:47.662 START TEST non_locking_app_on_locked_coremask 00:05:47.662 ************************************ 00:05:47.662 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:47.662 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58963 00:05:47.662 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58963 /var/tmp/spdk.sock 00:05:47.662 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.663 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58963 ']' 00:05:47.663 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.663 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.663 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
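default_locks_via_rpc, which finished just above, toggles the same core-0 lock at runtime instead of at startup. The rpc_cmd calls in the trace correspond to the following rpc.py invocations (rpc.py defaults to /var/tmp/spdk.sock, the socket used here):

    scripts/rpc.py framework_disable_cpumask_locks   # lock file dropped, so no_locks passes
    scripts/rpc.py framework_enable_cpumask_locks    # lock re-acquired, so locks_exist 58912 passes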
00:05:47.663 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.663 08:16:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.663 [2024-10-15 08:16:49.297167] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:47.663 [2024-10-15 08:16:49.297702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58963 ] 00:05:47.920 [2024-10-15 08:16:49.433641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.920 [2024-10-15 08:16:49.514826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.920 [2024-10-15 08:16:49.610136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58979 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58979 /var/tmp/spdk2.sock 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58979 ']' 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.877 08:16:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.877 [2024-10-15 08:16:50.368719] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:48.877 [2024-10-15 08:16:50.368831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:05:48.877 [2024-10-15 08:16:50.513547] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
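The non_locking_app_on_locked_coremask scenario now running: the first target claims core 0, and a second target may still start on the same core only because it opts out of core locking. In terms of the spdk_tgt invocations shown above:

    build/bin/spdk_tgt -m 0x1 &                                                  # pid 58963, claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 58979, 'CPU core locks deactivated'

Both instances then run side by side; before killing them the test only verifies that 58963 still owns the lock (the locks_exist 58963 call that follows).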
00:05:48.877 [2024-10-15 08:16:50.513640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.164 [2024-10-15 08:16:50.681257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.420 [2024-10-15 08:16:50.896320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.985 08:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.985 08:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:49.985 08:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58963 00:05:49.985 08:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58963 00:05:49.985 08:16:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58963 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58963 ']' 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58963 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58963 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.551 killing process with pid 58963 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58963' 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58963 00:05:50.551 08:16:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58963 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58979 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58979 ']' 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58979 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58979 00:05:51.926 killing process with pid 58979 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.926 08:16:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58979' 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58979 00:05:51.926 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58979 00:05:52.492 ************************************ 00:05:52.492 END TEST non_locking_app_on_locked_coremask 00:05:52.492 ************************************ 00:05:52.492 00:05:52.492 real 0m4.699s 00:05:52.492 user 0m5.067s 00:05:52.492 sys 0m1.360s 00:05:52.492 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.492 08:16:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.492 08:16:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.492 08:16:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.492 08:16:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.492 08:16:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.492 ************************************ 00:05:52.492 START TEST locking_app_on_unlocked_coremask 00:05:52.492 ************************************ 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59051 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59051 /var/tmp/spdk.sock 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59051 ']' 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.492 08:16:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.492 [2024-10-15 08:16:54.049752] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:05:52.492 [2024-10-15 08:16:54.049871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59051 ] 00:05:52.492 [2024-10-15 08:16:54.189915] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.492 [2024-10-15 08:16:54.189980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.764 [2024-10-15 08:16:54.268531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.764 [2024-10-15 08:16:54.363338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59067 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59067 /var/tmp/spdk2.sock 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59067 ']' 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.343 08:16:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.602 [2024-10-15 08:16:55.113415] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
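locking_app_on_unlocked_coremask flips the previous arrangement: the first target (pid 59051) starts with --disable-cpumask-locks, leaving core 0 unclaimed, and the plain second target (pid 59067) is the one that takes the lock, as the locks_exist 59067 check further down confirms. Roughly:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # pid 59051, takes no lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # pid 59067, acquires the core-0 lock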
00:05:53.602 [2024-10-15 08:16:55.114250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59067 ] 00:05:53.602 [2024-10-15 08:16:55.254658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.860 [2024-10-15 08:16:55.425603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.118 [2024-10-15 08:16:55.619134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.712 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.712 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:54.712 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59067 00:05:54.712 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59067 00:05:54.712 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59051 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59051 ']' 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59051 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59051 00:05:55.278 killing process with pid 59051 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59051' 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59051 00:05:55.278 08:16:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59051 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59067 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59067 ']' 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59067 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59067 00:05:56.737 killing process with pid 59067 00:05:56.737 08:16:58 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59067' 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59067 00:05:56.737 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59067 00:05:57.000 00:05:57.000 real 0m4.718s 00:05:57.000 user 0m5.068s 00:05:57.000 sys 0m1.332s 00:05:57.000 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.000 ************************************ 00:05:57.000 END TEST locking_app_on_unlocked_coremask 00:05:57.000 ************************************ 00:05:57.000 08:16:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.259 08:16:58 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.259 08:16:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.259 08:16:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.259 08:16:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.259 ************************************ 00:05:57.259 START TEST locking_app_on_locked_coremask 00:05:57.259 ************************************ 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59140 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59140 /var/tmp/spdk.sock 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59140 ']' 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.259 08:16:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.259 [2024-10-15 08:16:58.829220] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:05:57.259 [2024-10-15 08:16:58.829356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59140 ] 00:05:57.259 [2024-10-15 08:16:58.971241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.517 [2024-10-15 08:16:59.056387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.517 [2024-10-15 08:16:59.168974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59156 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59156 /var/tmp/spdk2.sock 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59156 /var/tmp/spdk2.sock 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59156 /var/tmp/spdk2.sock 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59156 ']' 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.453 08:16:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.453 [2024-10-15 08:16:59.958451] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:05:58.453 [2024-10-15 08:16:59.958894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59156 ] 00:05:58.453 [2024-10-15 08:17:00.099925] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59140 has claimed it. 00:05:58.453 [2024-10-15 08:17:00.100032] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.021 ERROR: process (pid: 59156) is no longer running 00:05:59.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59156) - No such process 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59140 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59140 00:05:59.021 08:17:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59140 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59140 ']' 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59140 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59140 00:05:59.589 killing process with pid 59140 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59140' 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59140 00:05:59.589 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59140 00:06:00.156 00:06:00.156 real 0m2.944s 00:06:00.156 user 0m3.374s 00:06:00.156 sys 0m0.756s 00:06:00.156 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.156 ************************************ 00:06:00.156 END 
TEST locking_app_on_locked_coremask 00:06:00.156 ************************************ 00:06:00.156 08:17:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.156 08:17:01 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.156 08:17:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.156 08:17:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.156 08:17:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.156 ************************************ 00:06:00.156 START TEST locking_overlapped_coremask 00:06:00.156 ************************************ 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59207 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59207 /var/tmp/spdk.sock 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59207 ']' 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.156 08:17:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.156 [2024-10-15 08:17:01.817397] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
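The 'Cannot create lock on core 0, probably process 59140 has claimed it' error above is the expected outcome of locking_app_on_locked_coremask: with locking active on both sides, the second instance must refuse to start, and the test asserts that with the NOT wrapper. In outline:

    build/bin/spdk_tgt -m 0x1 &                           # pid 59140, holds the core-0 lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 59156, exits: cannot lock core 0
    NOT waitforlisten 59156 /var/tmp/spdk2.sock           # passes precisely because 59156 never listens

So the 'ERROR: process (pid: 59156) is no longer running' and 'No such process' lines belong to a passing run, not a failure.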
00:06:00.156 [2024-10-15 08:17:01.817739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ] 00:06:00.415 [2024-10-15 08:17:01.952999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.415 [2024-10-15 08:17:02.035521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.415 [2024-10-15 08:17:02.035567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.415 [2024-10-15 08:17:02.035573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.415 [2024-10-15 08:17:02.137996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.675 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59224 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59224 /var/tmp/spdk2.sock 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59224 /var/tmp/spdk2.sock 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:00.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59224 /var/tmp/spdk2.sock 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59224 ']' 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.676 08:17:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.940 [2024-10-15 08:17:02.469074] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
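locking_overlapped_coremask, starting here, repeats the conflict with multi-core masks: the first target runs with -m 0x7 (cores 0-2, hence the three reactors above), the second with -m 0x1c (cores 2-4), so the two masks overlap only on core 2, which is the core named in the lock error that follows. Schematically:

    build/bin/spdk_tgt -m 0x7 &                            # pid 59207, locks cores 0, 1 and 2
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock &    # pid 59224, needs core 2 and must fail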
00:06:00.940 [2024-10-15 08:17:02.469211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59224 ] 00:06:00.940 [2024-10-15 08:17:02.615218] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59207 has claimed it. 00:06:00.940 [2024-10-15 08:17:02.615300] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.877 ERROR: process (pid: 59224) is no longer running 00:06:01.877 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59224) - No such process 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59207 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59207 ']' 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59207 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59207 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59207' 00:06:01.877 killing process with pid 59207 00:06:01.877 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59207 00:06:01.877 08:17:03 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59207 00:06:02.137 00:06:02.137 real 0m2.044s 00:06:02.137 user 0m5.533s 00:06:02.137 sys 0m0.483s 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.137 ************************************ 00:06:02.137 END TEST locking_overlapped_coremask 00:06:02.137 ************************************ 00:06:02.137 08:17:03 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:02.137 08:17:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.137 08:17:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.137 08:17:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.137 ************************************ 00:06:02.137 START TEST locking_overlapped_coremask_via_rpc 00:06:02.137 ************************************ 00:06:02.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59265 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59265 /var/tmp/spdk.sock 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59265 ']' 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.137 08:17:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.396 [2024-10-15 08:17:03.924424] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:02.396 [2024-10-15 08:17:03.924558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59265 ] 00:06:02.396 [2024-10-15 08:17:04.059655] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
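The check_remaining_locks step above is a glob-versus-brace-expansion comparison: while the surviving target (mask 0x7) is the only one running, /var/tmp should contain exactly the lock files for cores 0-2. Mirroring cpu_locks.sh@36-38 as shown in the log, and assuming the default lock-file prefix it uses, the same check looks like:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'lock files match core mask 0x7'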
00:06:02.396 [2024-10-15 08:17:04.060025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.654 [2024-10-15 08:17:04.138820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.654 [2024-10-15 08:17:04.138955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.654 [2024-10-15 08:17:04.139206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.654 [2024-10-15 08:17:04.234535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59281 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59281 /var/tmp/spdk2.sock 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59281 ']' 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.912 08:17:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.912 [2024-10-15 08:17:04.553983] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:02.913 [2024-10-15 08:17:04.554095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59281 ] 00:06:03.171 [2024-10-15 08:17:04.700410] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.171 [2024-10-15 08:17:04.700484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.171 [2024-10-15 08:17:04.864837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.171 [2024-10-15 08:17:04.868252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.171 [2024-10-15 08:17:04.868252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.430 [2024-10-15 08:17:05.052188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.998 [2024-10-15 08:17:05.612241] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59265 has claimed it. 00:06:03.998 request: 00:06:03.998 { 00:06:03.998 "method": "framework_enable_cpumask_locks", 00:06:03.998 "req_id": 1 00:06:03.998 } 00:06:03.998 Got JSON-RPC error response 00:06:03.998 response: 00:06:03.998 { 00:06:03.998 "code": -32603, 00:06:03.998 "message": "Failed to claim CPU core: 2" 00:06:03.998 } 00:06:03.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
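The JSON-RPC exchange just above is the heart of this test: with --disable-cpumask-locks both targets come up without claiming cores, and the locks are only requested later through the framework_enable_cpumask_locks method. The first target (mask 0x7) takes its cores when the method is called on /var/tmp/spdk.sock; the second target (mask 0x1c) is then refused with code -32603 because pid 59265 already holds core 2. The test drives this through rpc_cmd; roughly the same call can be made directly with scripts/rpc.py, assuming the method is exposed there as a subcommand of the same name:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # against the second socket this is expected to fail: 'Failed to claim CPU core: 2' (-32603)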
00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59265 /var/tmp/spdk.sock 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59265 ']' 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.998 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59281 /var/tmp/spdk2.sock 00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59281 ']' 00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.257 08:17:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.516 08:17:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.516 08:17:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:04.516 08:17:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:04.516 08:17:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.516 08:17:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.516 08:17:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.516 00:06:04.516 real 0m2.343s 00:06:04.516 user 0m1.268s 00:06:04.516 sys 0m0.213s 00:06:04.516 08:17:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.516 08:17:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.516 ************************************ 00:06:04.516 END TEST locking_overlapped_coremask_via_rpc 00:06:04.516 ************************************ 00:06:04.516 08:17:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:04.516 08:17:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59265 ]] 00:06:04.516 08:17:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59265 00:06:04.516 08:17:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59265 ']' 00:06:04.516 08:17:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59265 00:06:04.516 08:17:06 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:04.516 08:17:06 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.773 08:17:06 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59265 00:06:04.773 killing process with pid 59265 00:06:04.773 08:17:06 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.773 08:17:06 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.773 08:17:06 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59265' 00:06:04.773 08:17:06 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59265 00:06:04.773 08:17:06 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59265 00:06:05.337 08:17:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59281 ]] 00:06:05.337 08:17:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59281 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59281 ']' 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59281 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.337 
08:17:06 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59281 00:06:05.337 killing process with pid 59281 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59281' 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59281 00:06:05.337 08:17:06 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59281 00:06:05.903 08:17:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.903 08:17:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:05.903 08:17:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59265 ]] 00:06:05.903 08:17:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59265 00:06:05.903 08:17:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59265 ']' 00:06:05.903 08:17:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59265 00:06:05.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59265) - No such process 00:06:05.903 Process with pid 59265 is not found 00:06:05.903 08:17:07 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59265 is not found' 00:06:05.903 08:17:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59281 ]] 00:06:05.903 08:17:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59281 00:06:05.903 08:17:07 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59281 ']' 00:06:05.903 08:17:07 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59281 00:06:05.904 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59281) - No such process 00:06:05.904 Process with pid 59281 is not found 00:06:05.904 08:17:07 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59281 is not found' 00:06:05.904 08:17:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.904 00:06:05.904 real 0m22.099s 00:06:05.904 user 0m36.912s 00:06:05.904 sys 0m6.470s 00:06:05.904 08:17:07 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.904 08:17:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.904 ************************************ 00:06:05.904 END TEST cpu_locks 00:06:05.904 ************************************ 00:06:05.904 ************************************ 00:06:05.904 END TEST event 00:06:05.904 ************************************ 00:06:05.904 00:06:05.904 real 0m51.446s 00:06:05.904 user 1m37.577s 00:06:05.904 sys 0m10.629s 00:06:05.904 08:17:07 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.904 08:17:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.904 08:17:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.904 08:17:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.904 08:17:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.904 08:17:07 -- common/autotest_common.sh@10 -- # set +x 00:06:05.904 ************************************ 00:06:05.904 START TEST thread 00:06:05.904 ************************************ 00:06:05.904 08:17:07 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:05.904 * Looking for test storage... 
00:06:05.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:05.904 08:17:07 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:05.904 08:17:07 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:05.904 08:17:07 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:05.904 08:17:07 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:05.904 08:17:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.904 08:17:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.904 08:17:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.904 08:17:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.904 08:17:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.904 08:17:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.904 08:17:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.904 08:17:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.904 08:17:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.904 08:17:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.904 08:17:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.904 08:17:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:05.904 08:17:07 thread -- scripts/common.sh@345 -- # : 1 00:06:05.904 08:17:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.904 08:17:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.904 08:17:07 thread -- scripts/common.sh@365 -- # decimal 1 00:06:05.904 08:17:07 thread -- scripts/common.sh@353 -- # local d=1 00:06:05.904 08:17:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.904 08:17:07 thread -- scripts/common.sh@355 -- # echo 1 00:06:05.904 08:17:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.904 08:17:07 thread -- scripts/common.sh@366 -- # decimal 2 00:06:06.162 08:17:07 thread -- scripts/common.sh@353 -- # local d=2 00:06:06.162 08:17:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.162 08:17:07 thread -- scripts/common.sh@355 -- # echo 2 00:06:06.162 08:17:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.162 08:17:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.162 08:17:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.162 08:17:07 thread -- scripts/common.sh@368 -- # return 0 00:06:06.162 08:17:07 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.162 08:17:07 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:06.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.162 --rc genhtml_branch_coverage=1 00:06:06.162 --rc genhtml_function_coverage=1 00:06:06.162 --rc genhtml_legend=1 00:06:06.162 --rc geninfo_all_blocks=1 00:06:06.162 --rc geninfo_unexecuted_blocks=1 00:06:06.162 00:06:06.162 ' 00:06:06.162 08:17:07 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:06.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.162 --rc genhtml_branch_coverage=1 00:06:06.162 --rc genhtml_function_coverage=1 00:06:06.162 --rc genhtml_legend=1 00:06:06.162 --rc geninfo_all_blocks=1 00:06:06.162 --rc geninfo_unexecuted_blocks=1 00:06:06.162 00:06:06.162 ' 00:06:06.162 08:17:07 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:06.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:06.162 --rc genhtml_branch_coverage=1 00:06:06.162 --rc genhtml_function_coverage=1 00:06:06.162 --rc genhtml_legend=1 00:06:06.162 --rc geninfo_all_blocks=1 00:06:06.162 --rc geninfo_unexecuted_blocks=1 00:06:06.162 00:06:06.162 ' 00:06:06.162 08:17:07 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:06.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.162 --rc genhtml_branch_coverage=1 00:06:06.162 --rc genhtml_function_coverage=1 00:06:06.162 --rc genhtml_legend=1 00:06:06.162 --rc geninfo_all_blocks=1 00:06:06.162 --rc geninfo_unexecuted_blocks=1 00:06:06.162 00:06:06.162 ' 00:06:06.162 08:17:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.162 08:17:07 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:06.162 08:17:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.162 08:17:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.162 ************************************ 00:06:06.162 START TEST thread_poller_perf 00:06:06.162 ************************************ 00:06:06.162 08:17:07 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:06.162 [2024-10-15 08:17:07.664088] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:06.162 [2024-10-15 08:17:07.664532] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59417 ] 00:06:06.162 [2024-10-15 08:17:07.796972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.162 [2024-10-15 08:17:07.873854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.162 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:07.533 [2024-10-15T08:17:09.264Z] ====================================== 00:06:07.533 [2024-10-15T08:17:09.264Z] busy:2211226377 (cyc) 00:06:07.533 [2024-10-15T08:17:09.264Z] total_run_count: 316000 00:06:07.533 [2024-10-15T08:17:09.264Z] tsc_hz: 2200000000 (cyc) 00:06:07.533 [2024-10-15T08:17:09.264Z] ====================================== 00:06:07.533 [2024-10-15T08:17:09.264Z] poller_cost: 6997 (cyc), 3180 (nsec) 00:06:07.533 00:06:07.533 real 0m1.301s 00:06:07.533 user 0m1.146s 00:06:07.533 sys 0m0.047s 00:06:07.533 08:17:08 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.533 08:17:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.533 ************************************ 00:06:07.533 END TEST thread_poller_perf 00:06:07.533 ************************************ 00:06:07.533 08:17:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.533 08:17:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:07.533 08:17:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.533 08:17:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.533 ************************************ 00:06:07.533 START TEST thread_poller_perf 00:06:07.533 ************************************ 00:06:07.533 08:17:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:07.533 [2024-10-15 08:17:09.025763] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:07.533 [2024-10-15 08:17:09.025872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59447 ] 00:06:07.533 [2024-10-15 08:17:09.161969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.533 Running 1000 pollers for 1 seconds with 0 microseconds period. 
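The poller_cost figure in the table above is derived from the two numbers printed with it: 2211226377 busy cycles spread over 316000 poller invocations is about 6997 cycles per call, and at the reported tsc_hz of 2200000000 (2.2 cycles per nanosecond) that is 6997 / 2.2 ≈ 3180 nsec, matching the report. The division can be redone with a one-liner, e.g.:

  awk 'BEGIN { c = 2211226377 / 316000; printf "%d cyc, %d nsec\n", int(c), int(c / 2.2) }'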
00:06:07.533 [2024-10-15 08:17:09.241743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.925 [2024-10-15T08:17:10.656Z] ====================================== 00:06:08.925 [2024-10-15T08:17:10.656Z] busy:2202562767 (cyc) 00:06:08.925 [2024-10-15T08:17:10.656Z] total_run_count: 4208000 00:06:08.925 [2024-10-15T08:17:10.656Z] tsc_hz: 2200000000 (cyc) 00:06:08.925 [2024-10-15T08:17:10.656Z] ====================================== 00:06:08.925 [2024-10-15T08:17:10.656Z] poller_cost: 523 (cyc), 237 (nsec) 00:06:08.925 ************************************ 00:06:08.925 END TEST thread_poller_perf 00:06:08.925 ************************************ 00:06:08.925 00:06:08.925 real 0m1.306s 00:06:08.925 user 0m1.151s 00:06:08.925 sys 0m0.047s 00:06:08.925 08:17:10 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.925 08:17:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.925 08:17:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:08.925 00:06:08.925 real 0m2.900s 00:06:08.925 user 0m2.448s 00:06:08.925 sys 0m0.235s 00:06:08.925 ************************************ 00:06:08.925 END TEST thread 00:06:08.925 ************************************ 00:06:08.925 08:17:10 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.925 08:17:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.925 08:17:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:08.925 08:17:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.925 08:17:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.925 08:17:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.925 08:17:10 -- common/autotest_common.sh@10 -- # set +x 00:06:08.925 ************************************ 00:06:08.925 START TEST app_cmdline 00:06:08.925 ************************************ 00:06:08.925 08:17:10 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.925 * Looking for test storage... 
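Same arithmetic for the second run, which uses -l 0 (zero-microsecond period, i.e. plain busy pollers): 2202562767 cycles over 4208000 runs gives about 523 cycles, and 523 / 2.2 ≈ 237 nsec per call, as reported. The busy-cycle total is about one second's worth of TSC ticks in both runs, so the per-call cost falls in direct proportion to the much higher invocation count here.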
00:06:08.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:08.925 08:17:10 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.925 08:17:10 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.925 08:17:10 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.925 08:17:10 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:08.925 08:17:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.926 08:17:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:08.926 08:17:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:08.926 08:17:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.926 08:17:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:08.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.926 08:17:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.926 08:17:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.926 08:17:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.926 08:17:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.926 --rc genhtml_branch_coverage=1 00:06:08.926 --rc genhtml_function_coverage=1 00:06:08.926 --rc genhtml_legend=1 00:06:08.926 --rc geninfo_all_blocks=1 00:06:08.926 --rc geninfo_unexecuted_blocks=1 00:06:08.926 00:06:08.926 ' 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.926 --rc genhtml_branch_coverage=1 00:06:08.926 --rc genhtml_function_coverage=1 00:06:08.926 --rc genhtml_legend=1 00:06:08.926 --rc geninfo_all_blocks=1 00:06:08.926 --rc geninfo_unexecuted_blocks=1 00:06:08.926 00:06:08.926 ' 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.926 --rc genhtml_branch_coverage=1 00:06:08.926 --rc genhtml_function_coverage=1 00:06:08.926 --rc genhtml_legend=1 00:06:08.926 --rc geninfo_all_blocks=1 00:06:08.926 --rc geninfo_unexecuted_blocks=1 00:06:08.926 00:06:08.926 ' 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.926 --rc genhtml_branch_coverage=1 00:06:08.926 --rc genhtml_function_coverage=1 00:06:08.926 --rc genhtml_legend=1 00:06:08.926 --rc geninfo_all_blocks=1 00:06:08.926 --rc geninfo_unexecuted_blocks=1 00:06:08.926 00:06:08.926 ' 00:06:08.926 08:17:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:08.926 08:17:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59535 00:06:08.926 08:17:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:08.926 08:17:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59535 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59535 ']' 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.926 08:17:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.184 [2024-10-15 08:17:10.664621] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:09.184 [2024-10-15 08:17:10.664952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59535 ] 00:06:09.184 [2024-10-15 08:17:10.803689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.185 [2024-10-15 08:17:10.891020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.442 [2024-10-15 08:17:10.989013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.008 08:17:11 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.008 08:17:11 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:10.008 08:17:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:10.266 { 00:06:10.266 "version": "SPDK v25.01-pre git sha1 30f8ce7c5", 00:06:10.266 "fields": { 00:06:10.266 "major": 25, 00:06:10.266 "minor": 1, 00:06:10.266 "patch": 0, 00:06:10.266 "suffix": "-pre", 00:06:10.266 "commit": "30f8ce7c5" 00:06:10.266 } 00:06:10.266 } 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:10.266 08:17:11 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.266 08:17:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.266 08:17:11 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.266 08:17:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.266 08:17:11 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:10.266 08:17:11 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.266 08:17:11 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.266 08:17:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.266 08:17:11 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.524 08:17:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.524 08:17:11 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.524 08:17:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.524 08:17:11 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.524 08:17:11 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:10.524 08:17:11 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.783 request: 00:06:10.783 { 00:06:10.783 "method": "env_dpdk_get_mem_stats", 00:06:10.783 "req_id": 1 00:06:10.783 } 00:06:10.783 Got JSON-RPC error response 00:06:10.783 response: 00:06:10.783 { 00:06:10.783 "code": -32601, 00:06:10.783 "message": "Method not found" 00:06:10.783 } 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.783 08:17:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59535 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59535 ']' 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59535 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59535 00:06:10.783 killing process with pid 59535 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59535' 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@969 -- # kill 59535 00:06:10.783 08:17:12 app_cmdline -- common/autotest_common.sh@974 -- # wait 59535 00:06:11.350 00:06:11.350 real 0m2.447s 00:06:11.350 user 0m2.944s 00:06:11.350 sys 0m0.588s 00:06:11.350 ************************************ 00:06:11.350 END TEST app_cmdline 00:06:11.350 ************************************ 00:06:11.350 08:17:12 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.350 08:17:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.350 08:17:12 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:11.350 08:17:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.350 08:17:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.350 08:17:12 -- common/autotest_common.sh@10 -- # set +x 00:06:11.350 ************************************ 00:06:11.350 START TEST version 00:06:11.350 ************************************ 00:06:11.350 08:17:12 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:11.350 * Looking for test storage... 
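For the app_cmdline test that ends above, the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly two methods are callable: spdk_get_version returns the version object echoed in the log, rpc_get_methods lists the allowed methods, and anything else, such as env_dpdk_get_mem_stats, is rejected with -32601 'Method not found' even though the target itself is healthy. The same behaviour can be reproduced against such a target with the two rpc.py calls the test uses:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # allowed, prints the version JSON
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # filtered by --rpcs-allowed: JSON-RPC error -32601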
00:06:11.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:11.350 08:17:12 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:11.350 08:17:12 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:11.350 08:17:12 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:11.350 08:17:13 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:11.350 08:17:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.350 08:17:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.350 08:17:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.350 08:17:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.350 08:17:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.350 08:17:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.350 08:17:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.350 08:17:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.350 08:17:13 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.350 08:17:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.350 08:17:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.350 08:17:13 version -- scripts/common.sh@344 -- # case "$op" in 00:06:11.350 08:17:13 version -- scripts/common.sh@345 -- # : 1 00:06:11.350 08:17:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.350 08:17:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.350 08:17:13 version -- scripts/common.sh@365 -- # decimal 1 00:06:11.350 08:17:13 version -- scripts/common.sh@353 -- # local d=1 00:06:11.350 08:17:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.350 08:17:13 version -- scripts/common.sh@355 -- # echo 1 00:06:11.350 08:17:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.350 08:17:13 version -- scripts/common.sh@366 -- # decimal 2 00:06:11.350 08:17:13 version -- scripts/common.sh@353 -- # local d=2 00:06:11.350 08:17:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.350 08:17:13 version -- scripts/common.sh@355 -- # echo 2 00:06:11.350 08:17:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.350 08:17:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.350 08:17:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.350 08:17:13 version -- scripts/common.sh@368 -- # return 0 00:06:11.350 08:17:13 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.350 08:17:13 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:11.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.350 --rc genhtml_branch_coverage=1 00:06:11.350 --rc genhtml_function_coverage=1 00:06:11.350 --rc genhtml_legend=1 00:06:11.350 --rc geninfo_all_blocks=1 00:06:11.350 --rc geninfo_unexecuted_blocks=1 00:06:11.350 00:06:11.350 ' 00:06:11.350 08:17:13 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:11.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.350 --rc genhtml_branch_coverage=1 00:06:11.350 --rc genhtml_function_coverage=1 00:06:11.350 --rc genhtml_legend=1 00:06:11.350 --rc geninfo_all_blocks=1 00:06:11.350 --rc geninfo_unexecuted_blocks=1 00:06:11.350 00:06:11.350 ' 00:06:11.350 08:17:13 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:11.350 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:11.351 --rc genhtml_branch_coverage=1 00:06:11.351 --rc genhtml_function_coverage=1 00:06:11.351 --rc genhtml_legend=1 00:06:11.351 --rc geninfo_all_blocks=1 00:06:11.351 --rc geninfo_unexecuted_blocks=1 00:06:11.351 00:06:11.351 ' 00:06:11.351 08:17:13 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:11.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.351 --rc genhtml_branch_coverage=1 00:06:11.351 --rc genhtml_function_coverage=1 00:06:11.351 --rc genhtml_legend=1 00:06:11.351 --rc geninfo_all_blocks=1 00:06:11.351 --rc geninfo_unexecuted_blocks=1 00:06:11.351 00:06:11.351 ' 00:06:11.351 08:17:13 version -- app/version.sh@17 -- # get_header_version major 00:06:11.351 08:17:13 version -- app/version.sh@14 -- # cut -f2 00:06:11.351 08:17:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.351 08:17:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.351 08:17:13 version -- app/version.sh@17 -- # major=25 00:06:11.351 08:17:13 version -- app/version.sh@18 -- # get_header_version minor 00:06:11.351 08:17:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.351 08:17:13 version -- app/version.sh@14 -- # cut -f2 00:06:11.351 08:17:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.610 08:17:13 version -- app/version.sh@18 -- # minor=1 00:06:11.610 08:17:13 version -- app/version.sh@19 -- # get_header_version patch 00:06:11.610 08:17:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.610 08:17:13 version -- app/version.sh@14 -- # cut -f2 00:06:11.610 08:17:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.610 08:17:13 version -- app/version.sh@19 -- # patch=0 00:06:11.610 08:17:13 version -- app/version.sh@20 -- # get_header_version suffix 00:06:11.610 08:17:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.610 08:17:13 version -- app/version.sh@14 -- # cut -f2 00:06:11.610 08:17:13 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.610 08:17:13 version -- app/version.sh@20 -- # suffix=-pre 00:06:11.610 08:17:13 version -- app/version.sh@22 -- # version=25.1 00:06:11.610 08:17:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:11.610 08:17:13 version -- app/version.sh@28 -- # version=25.1rc0 00:06:11.610 08:17:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:11.610 08:17:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:11.610 08:17:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:11.610 08:17:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:11.610 00:06:11.610 real 0m0.243s 00:06:11.610 user 0m0.157s 00:06:11.610 sys 0m0.120s 00:06:11.610 ************************************ 00:06:11.610 END TEST version 00:06:11.610 ************************************ 00:06:11.610 08:17:13 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.610 08:17:13 version -- common/autotest_common.sh@10 -- # set +x 00:06:11.610 08:17:13 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:11.610 08:17:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:11.610 08:17:13 -- spdk/autotest.sh@194 -- # uname -s 00:06:11.610 08:17:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:11.610 08:17:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:11.610 08:17:13 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:11.610 08:17:13 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:11.610 08:17:13 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:11.610 08:17:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.610 08:17:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.610 08:17:13 -- common/autotest_common.sh@10 -- # set +x 00:06:11.610 ************************************ 00:06:11.610 START TEST spdk_dd 00:06:11.610 ************************************ 00:06:11.610 08:17:13 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:11.610 * Looking for test storage... 00:06:11.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:11.610 08:17:13 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:11.610 08:17:13 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:06:11.610 08:17:13 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:11.872 08:17:13 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:11.872 08:17:13 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.872 08:17:13 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:11.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.872 --rc genhtml_branch_coverage=1 00:06:11.872 --rc genhtml_function_coverage=1 00:06:11.872 --rc genhtml_legend=1 00:06:11.872 --rc geninfo_all_blocks=1 00:06:11.872 --rc geninfo_unexecuted_blocks=1 00:06:11.872 00:06:11.872 ' 00:06:11.872 08:17:13 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:11.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.872 --rc genhtml_branch_coverage=1 00:06:11.872 --rc genhtml_function_coverage=1 00:06:11.872 --rc genhtml_legend=1 00:06:11.872 --rc geninfo_all_blocks=1 00:06:11.872 --rc geninfo_unexecuted_blocks=1 00:06:11.872 00:06:11.872 ' 00:06:11.872 08:17:13 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:11.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.872 --rc genhtml_branch_coverage=1 00:06:11.872 --rc genhtml_function_coverage=1 00:06:11.872 --rc genhtml_legend=1 00:06:11.872 --rc geninfo_all_blocks=1 00:06:11.872 --rc geninfo_unexecuted_blocks=1 00:06:11.872 00:06:11.872 ' 00:06:11.872 08:17:13 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:11.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.872 --rc genhtml_branch_coverage=1 00:06:11.872 --rc genhtml_function_coverage=1 00:06:11.872 --rc genhtml_legend=1 00:06:11.872 --rc geninfo_all_blocks=1 00:06:11.872 --rc geninfo_unexecuted_blocks=1 00:06:11.872 00:06:11.872 ' 00:06:11.872 08:17:13 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.872 08:17:13 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.872 08:17:13 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.872 08:17:13 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.872 08:17:13 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.872 08:17:13 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:11.872 08:17:13 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.872 08:17:13 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:12.131 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:12.131 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:12.131 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:12.131 08:17:13 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:12.131 08:17:13 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:12.131 08:17:13 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:12.131 08:17:13 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:12.131 08:17:13 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
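The controller discovery traced above (scripts/common.sh, iter_pci_class_code 01 08 02) reduces to a single lspci pipeline: print every PCI function with numeric IDs, keep the rows whose class/prog-if encode an NVMe controller (class 0108, prog-if 02), and emit their bus addresses. A condensed stand-alone sketch of that pipeline follows; the helper name is illustrative and the pipeline order is reconstructed from the xtrace line numbers, not copied from the script.

    #!/usr/bin/env bash
    # Condensed NVMe enumeration as traced above: class 01 / subclass 08 /
    # prog-if 02 identifies an NVMe controller. lspci -mm -n -D prints one
    # quoted-field row per PCI function; awk keeps rows whose class field
    # matches 0108 and prints $1, the PCI address (bdf).
    list_nvme_bdfs() {    # illustrative name, not taken from the script
        lspci -mm -n -D | grep -i -- -p02 |
            awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' |
            tr -d '"'
    }

    list_nvme_bdfs    # on this VM: 0000:00:10.0 and 0000:00:11.0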
00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:12.131 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
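The long run of [[ libspdk_*.so == liburing.so.* ]] tests above is dd/common.sh walking the NEEDED entries of the spdk_dd binary to decide whether it was linked against liburing. Roughly the following, under the assumption that feeding the objdump output through process substitution matches the original read loop; only the per-library tests, the "linked to liburing" printf, and the final arithmetic guard are visible in the trace.

    #!/usr/bin/env bash
    # Scan the dynamic section of spdk_dd for a liburing.so.* NEEDED entry,
    # as the [[ ... == liburing.so.* ]] loop above does.
    SPDK_TEST_URING=${SPDK_TEST_URING:-1}    # set in this job's autorun config
    liburing_in_use=0
    while read -r _ lib _; do
        if [[ $lib == liburing.so.* ]]; then
            printf '* spdk_dd linked to liburing\n'
            liburing_in_use=1
        fi
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)

    # dd.sh@15 then evaluates this guard; its true branch is not shown in the
    # trace (here liburing_in_use is 1, so the run simply continues).
    (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))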
00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.132 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.2 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.465 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:12.466 * spdk_dd linked to liburing 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_PGO_USE=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:06:12.466 08:17:13 
spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:06:12.466 08:17:13 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:12.466 08:17:13 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:12.466 08:17:13 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:12.466 08:17:13 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:12.466 08:17:13 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:12.466 08:17:13 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.466 08:17:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:12.466 ************************************ 00:06:12.466 START TEST spdk_dd_basic_rw 00:06:12.466 ************************************ 00:06:12.466 08:17:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:12.466 * Looking for test storage... 00:06:12.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:12.466 08:17:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:12.466 08:17:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:12.466 08:17:13 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:12.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.466 --rc genhtml_branch_coverage=1 00:06:12.466 --rc genhtml_function_coverage=1 00:06:12.466 --rc genhtml_legend=1 00:06:12.466 --rc geninfo_all_blocks=1 00:06:12.466 --rc geninfo_unexecuted_blocks=1 00:06:12.466 00:06:12.466 ' 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:12.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.466 --rc genhtml_branch_coverage=1 00:06:12.466 --rc genhtml_function_coverage=1 00:06:12.466 --rc genhtml_legend=1 00:06:12.466 --rc geninfo_all_blocks=1 00:06:12.466 --rc geninfo_unexecuted_blocks=1 00:06:12.466 00:06:12.466 ' 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:12.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.466 --rc genhtml_branch_coverage=1 00:06:12.466 --rc genhtml_function_coverage=1 00:06:12.466 --rc genhtml_legend=1 00:06:12.466 --rc geninfo_all_blocks=1 00:06:12.466 --rc geninfo_unexecuted_blocks=1 00:06:12.466 00:06:12.466 ' 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:12.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.466 --rc genhtml_branch_coverage=1 00:06:12.466 --rc genhtml_function_coverage=1 00:06:12.466 --rc genhtml_legend=1 00:06:12.466 --rc geninfo_all_blocks=1 00:06:12.466 --rc geninfo_unexecuted_blocks=1 00:06:12.466 00:06:12.466 ' 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.466 08:17:14 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.467 08:17:14 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:12.467 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:12.731 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:12.731 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.732 ************************************ 00:06:12.732 START TEST dd_bs_lt_native_bs 00:06:12.732 ************************************ 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:12.732 08:17:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.732 { 00:06:12.732 "subsystems": [ 00:06:12.732 { 00:06:12.732 "subsystem": "bdev", 00:06:12.732 "config": [ 00:06:12.732 { 00:06:12.732 "params": { 00:06:12.732 "trtype": "pcie", 00:06:12.732 "traddr": "0000:00:10.0", 00:06:12.732 "name": "Nvme0" 00:06:12.732 }, 00:06:12.732 "method": "bdev_nvme_attach_controller" 00:06:12.732 }, 00:06:12.732 { 00:06:12.732 "method": "bdev_wait_for_examine" 00:06:12.732 } 00:06:12.732 ] 00:06:12.732 } 00:06:12.732 ] 00:06:12.732 } 00:06:12.732 [2024-10-15 08:17:14.360016] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:12.732 [2024-10-15 08:17:14.361035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:06:12.992 [2024-10-15 08:17:14.494805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.992 [2024-10-15 08:17:14.594665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.992 [2024-10-15 08:17:14.678549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.251 [2024-10-15 08:17:14.806242] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:13.251 [2024-10-15 08:17:14.806348] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.510 [2024-10-15 08:17:14.985960] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.510 00:06:13.510 real 0m0.762s 00:06:13.510 user 0m0.512s 00:06:13.510 sys 0m0.195s 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.510 08:17:15 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:13.510 ************************************ 00:06:13.510 END TEST dd_bs_lt_native_bs 00:06:13.510 ************************************ 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.510 ************************************ 00:06:13.510 START TEST dd_rw 00:06:13.510 ************************************ 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:13.510 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.079 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:14.079 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:14.079 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.079 08:17:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.079 [2024-10-15 08:17:15.755738] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:14.079 [2024-10-15 08:17:15.756046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59915 ] 00:06:14.079 { 00:06:14.079 "subsystems": [ 00:06:14.079 { 00:06:14.079 "subsystem": "bdev", 00:06:14.079 "config": [ 00:06:14.079 { 00:06:14.079 "params": { 00:06:14.079 "trtype": "pcie", 00:06:14.079 "traddr": "0000:00:10.0", 00:06:14.079 "name": "Nvme0" 00:06:14.079 }, 00:06:14.079 "method": "bdev_nvme_attach_controller" 00:06:14.079 }, 00:06:14.079 { 00:06:14.079 "method": "bdev_wait_for_examine" 00:06:14.079 } 00:06:14.079 ] 00:06:14.079 } 00:06:14.079 ] 00:06:14.079 } 00:06:14.337 [2024-10-15 08:17:15.893374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.337 [2024-10-15 08:17:15.976209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.337 [2024-10-15 08:17:16.059960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.596  [2024-10-15T08:17:16.586Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:14.855 00:06:14.855 08:17:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:14.855 08:17:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:14.856 08:17:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.856 08:17:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.856 { 00:06:14.856 "subsystems": [ 00:06:14.856 { 00:06:14.856 "subsystem": "bdev", 00:06:14.856 "config": [ 00:06:14.856 { 00:06:14.856 "params": { 00:06:14.856 "trtype": "pcie", 00:06:14.856 "traddr": "0000:00:10.0", 00:06:14.856 "name": "Nvme0" 00:06:14.856 }, 00:06:14.856 "method": "bdev_nvme_attach_controller" 00:06:14.856 }, 00:06:14.856 { 00:06:14.856 "method": "bdev_wait_for_examine" 00:06:14.856 } 00:06:14.856 ] 00:06:14.856 } 00:06:14.856 ] 00:06:14.856 } 00:06:14.856 [2024-10-15 08:17:16.501266] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:14.856 [2024-10-15 08:17:16.501421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59933 ] 00:06:15.115 [2024-10-15 08:17:16.644015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.115 [2024-10-15 08:17:16.723498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.115 [2024-10-15 08:17:16.796766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.373  [2024-10-15T08:17:17.363Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:15.632 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.632 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.632 [2024-10-15 08:17:17.230213] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:15.632 [2024-10-15 08:17:17.230316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59954 ] 00:06:15.632 { 00:06:15.632 "subsystems": [ 00:06:15.632 { 00:06:15.632 "subsystem": "bdev", 00:06:15.632 "config": [ 00:06:15.632 { 00:06:15.632 "params": { 00:06:15.632 "trtype": "pcie", 00:06:15.632 "traddr": "0000:00:10.0", 00:06:15.632 "name": "Nvme0" 00:06:15.632 }, 00:06:15.632 "method": "bdev_nvme_attach_controller" 00:06:15.632 }, 00:06:15.632 { 00:06:15.632 "method": "bdev_wait_for_examine" 00:06:15.632 } 00:06:15.632 ] 00:06:15.632 } 00:06:15.632 ] 00:06:15.632 } 00:06:15.919 [2024-10-15 08:17:17.364924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.919 [2024-10-15 08:17:17.439414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.919 [2024-10-15 08:17:17.510275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.197  [2024-10-15T08:17:17.928Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:16.197 00:06:16.197 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:16.197 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:16.197 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:16.197 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:16.197 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:16.197 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:16.197 08:17:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.131 08:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:17.131 08:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:17.131 08:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.131 08:17:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.131 [2024-10-15 08:17:18.579637] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:17.131 [2024-10-15 08:17:18.580042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59973 ] 00:06:17.131 { 00:06:17.131 "subsystems": [ 00:06:17.131 { 00:06:17.131 "subsystem": "bdev", 00:06:17.131 "config": [ 00:06:17.131 { 00:06:17.131 "params": { 00:06:17.131 "trtype": "pcie", 00:06:17.131 "traddr": "0000:00:10.0", 00:06:17.131 "name": "Nvme0" 00:06:17.131 }, 00:06:17.131 "method": "bdev_nvme_attach_controller" 00:06:17.131 }, 00:06:17.131 { 00:06:17.131 "method": "bdev_wait_for_examine" 00:06:17.131 } 00:06:17.131 ] 00:06:17.131 } 00:06:17.131 ] 00:06:17.131 } 00:06:17.131 [2024-10-15 08:17:18.719673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.131 [2024-10-15 08:17:18.796996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.388 [2024-10-15 08:17:18.867298] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.388  [2024-10-15T08:17:19.378Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:17.647 00:06:17.647 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:17.647 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:17.647 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.647 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.647 { 00:06:17.647 "subsystems": [ 00:06:17.647 { 00:06:17.647 "subsystem": "bdev", 00:06:17.647 "config": [ 00:06:17.647 { 00:06:17.647 "params": { 00:06:17.647 "trtype": "pcie", 00:06:17.647 "traddr": "0000:00:10.0", 00:06:17.647 "name": "Nvme0" 00:06:17.647 }, 00:06:17.647 "method": "bdev_nvme_attach_controller" 00:06:17.647 }, 00:06:17.647 { 00:06:17.647 "method": "bdev_wait_for_examine" 00:06:17.647 } 00:06:17.647 ] 00:06:17.647 } 00:06:17.647 ] 00:06:17.647 } 00:06:17.647 [2024-10-15 08:17:19.306141] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:17.647 [2024-10-15 08:17:19.306270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59992 ] 00:06:17.905 [2024-10-15 08:17:19.449636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.905 [2024-10-15 08:17:19.526275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.905 [2024-10-15 08:17:19.596752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.164  [2024-10-15T08:17:20.154Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:18.423 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.423 08:17:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.423 { 00:06:18.423 "subsystems": [ 00:06:18.423 { 00:06:18.423 "subsystem": "bdev", 00:06:18.423 "config": [ 00:06:18.423 { 00:06:18.423 "params": { 00:06:18.423 "trtype": "pcie", 00:06:18.423 "traddr": "0000:00:10.0", 00:06:18.423 "name": "Nvme0" 00:06:18.423 }, 00:06:18.423 "method": "bdev_nvme_attach_controller" 00:06:18.423 }, 00:06:18.423 { 00:06:18.423 "method": "bdev_wait_for_examine" 00:06:18.423 } 00:06:18.423 ] 00:06:18.423 } 00:06:18.423 ] 00:06:18.423 } 00:06:18.423 [2024-10-15 08:17:20.043688] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:18.423 [2024-10-15 08:17:20.043785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60012 ] 00:06:18.682 [2024-10-15 08:17:20.179263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.682 [2024-10-15 08:17:20.256416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.682 [2024-10-15 08:17:20.328311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.941  [2024-10-15T08:17:20.930Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:19.199 00:06:19.199 08:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:19.199 08:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:19.199 08:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:19.199 08:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:19.199 08:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:19.199 08:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:19.199 08:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:19.199 08:17:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.766 08:17:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:19.766 08:17:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:19.766 08:17:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.766 08:17:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.766 { 00:06:19.766 "subsystems": [ 00:06:19.766 { 00:06:19.766 "subsystem": "bdev", 00:06:19.766 "config": [ 00:06:19.766 { 00:06:19.766 "params": { 00:06:19.766 "trtype": "pcie", 00:06:19.766 "traddr": "0000:00:10.0", 00:06:19.766 "name": "Nvme0" 00:06:19.766 }, 00:06:19.766 "method": "bdev_nvme_attach_controller" 00:06:19.766 }, 00:06:19.766 { 00:06:19.766 "method": "bdev_wait_for_examine" 00:06:19.766 } 00:06:19.766 ] 00:06:19.766 } 00:06:19.766 ] 00:06:19.766 } 00:06:19.766 [2024-10-15 08:17:21.338844] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:19.766 [2024-10-15 08:17:21.339205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60032 ] 00:06:19.766 [2024-10-15 08:17:21.479303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.025 [2024-10-15 08:17:21.556779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.025 [2024-10-15 08:17:21.629096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.284  [2024-10-15T08:17:22.015Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:20.284 00:06:20.284 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:20.284 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:20.284 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.284 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.558 [2024-10-15 08:17:22.072602] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:20.558 [2024-10-15 08:17:22.072738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60047 ] 00:06:20.558 { 00:06:20.558 "subsystems": [ 00:06:20.558 { 00:06:20.558 "subsystem": "bdev", 00:06:20.558 "config": [ 00:06:20.558 { 00:06:20.558 "params": { 00:06:20.558 "trtype": "pcie", 00:06:20.558 "traddr": "0000:00:10.0", 00:06:20.558 "name": "Nvme0" 00:06:20.558 }, 00:06:20.558 "method": "bdev_nvme_attach_controller" 00:06:20.558 }, 00:06:20.558 { 00:06:20.558 "method": "bdev_wait_for_examine" 00:06:20.558 } 00:06:20.558 ] 00:06:20.558 } 00:06:20.558 ] 00:06:20.558 } 00:06:20.558 [2024-10-15 08:17:22.210318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.817 [2024-10-15 08:17:22.289478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.817 [2024-10-15 08:17:22.362088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.817  [2024-10-15T08:17:22.807Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:21.076 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:21.076 08:17:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:21.076 { 00:06:21.076 "subsystems": [ 00:06:21.076 { 00:06:21.076 "subsystem": "bdev", 00:06:21.076 "config": [ 00:06:21.076 { 00:06:21.076 "params": { 00:06:21.076 "trtype": "pcie", 00:06:21.076 "traddr": "0000:00:10.0", 00:06:21.076 "name": "Nvme0" 00:06:21.076 }, 00:06:21.076 "method": "bdev_nvme_attach_controller" 00:06:21.076 }, 00:06:21.076 { 00:06:21.076 "method": "bdev_wait_for_examine" 00:06:21.076 } 00:06:21.076 ] 00:06:21.076 } 00:06:21.076 ] 00:06:21.076 } 00:06:21.076 [2024-10-15 08:17:22.802032] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:21.076 [2024-10-15 08:17:22.802344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60062 ] 00:06:21.335 [2024-10-15 08:17:22.938673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.335 [2024-10-15 08:17:23.018054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.593 [2024-10-15 08:17:23.092005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.593  [2024-10-15T08:17:23.583Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:21.852 00:06:21.852 08:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:21.852 08:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:21.852 08:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:21.852 08:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:21.853 08:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:21.853 08:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:21.853 08:17:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.420 08:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:22.420 08:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:22.420 08:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.420 08:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.420 { 00:06:22.420 "subsystems": [ 00:06:22.420 { 00:06:22.420 "subsystem": "bdev", 00:06:22.420 "config": [ 00:06:22.420 { 00:06:22.420 "params": { 00:06:22.420 "trtype": "pcie", 00:06:22.420 "traddr": "0000:00:10.0", 00:06:22.420 "name": "Nvme0" 00:06:22.420 }, 00:06:22.420 "method": "bdev_nvme_attach_controller" 00:06:22.420 }, 00:06:22.420 { 00:06:22.420 "method": "bdev_wait_for_examine" 00:06:22.420 } 00:06:22.420 ] 00:06:22.420 } 00:06:22.420 ] 00:06:22.420 } 00:06:22.420 [2024-10-15 08:17:24.097911] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:22.420 [2024-10-15 08:17:24.098035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60087 ] 00:06:22.678 [2024-10-15 08:17:24.238485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.678 [2024-10-15 08:17:24.307768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.678 [2024-10-15 08:17:24.384514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.937  [2024-10-15T08:17:24.926Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:23.195 00:06:23.195 08:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:23.195 08:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:23.195 08:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.195 08:17:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.195 { 00:06:23.195 "subsystems": [ 00:06:23.195 { 00:06:23.195 "subsystem": "bdev", 00:06:23.195 "config": [ 00:06:23.195 { 00:06:23.195 "params": { 00:06:23.195 "trtype": "pcie", 00:06:23.195 "traddr": "0000:00:10.0", 00:06:23.195 "name": "Nvme0" 00:06:23.195 }, 00:06:23.195 "method": "bdev_nvme_attach_controller" 00:06:23.195 }, 00:06:23.195 { 00:06:23.195 "method": "bdev_wait_for_examine" 00:06:23.195 } 00:06:23.195 ] 00:06:23.195 } 00:06:23.195 ] 00:06:23.195 } 00:06:23.195 [2024-10-15 08:17:24.831186] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:23.195 [2024-10-15 08:17:24.831303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60100 ] 00:06:23.453 [2024-10-15 08:17:24.969074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.453 [2024-10-15 08:17:25.048561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.453 [2024-10-15 08:17:25.122029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.733  [2024-10-15T08:17:25.723Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:23.992 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.992 08:17:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.992 [2024-10-15 08:17:25.584748] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:23.992 [2024-10-15 08:17:25.584874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60121 ] 00:06:23.992 { 00:06:23.992 "subsystems": [ 00:06:23.992 { 00:06:23.992 "subsystem": "bdev", 00:06:23.992 "config": [ 00:06:23.992 { 00:06:23.992 "params": { 00:06:23.992 "trtype": "pcie", 00:06:23.992 "traddr": "0000:00:10.0", 00:06:23.992 "name": "Nvme0" 00:06:23.992 }, 00:06:23.992 "method": "bdev_nvme_attach_controller" 00:06:23.992 }, 00:06:23.992 { 00:06:23.992 "method": "bdev_wait_for_examine" 00:06:23.992 } 00:06:23.992 ] 00:06:23.992 } 00:06:23.992 ] 00:06:23.992 } 00:06:24.250 [2024-10-15 08:17:25.725296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.250 [2024-10-15 08:17:25.803808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.250 [2024-10-15 08:17:25.876874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.508  [2024-10-15T08:17:26.497Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:24.766 00:06:24.766 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:24.766 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:24.766 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:24.766 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:24.766 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:24.766 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:24.766 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:24.766 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.335 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:25.335 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:25.335 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.335 08:17:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.335 { 00:06:25.335 "subsystems": [ 00:06:25.335 { 00:06:25.335 "subsystem": "bdev", 00:06:25.335 "config": [ 00:06:25.335 { 00:06:25.335 "params": { 00:06:25.335 "trtype": "pcie", 00:06:25.335 "traddr": "0000:00:10.0", 00:06:25.335 "name": "Nvme0" 00:06:25.335 }, 00:06:25.335 "method": "bdev_nvme_attach_controller" 00:06:25.335 }, 00:06:25.335 { 00:06:25.335 "method": "bdev_wait_for_examine" 00:06:25.335 } 00:06:25.335 ] 00:06:25.335 } 00:06:25.335 ] 00:06:25.335 } 00:06:25.335 [2024-10-15 08:17:26.823921] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:25.335 [2024-10-15 08:17:26.824174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60140 ] 00:06:25.335 [2024-10-15 08:17:26.957343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.335 [2024-10-15 08:17:27.043275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.594 [2024-10-15 08:17:27.121507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.594  [2024-10-15T08:17:27.584Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:25.853 00:06:25.853 08:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:25.853 08:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:25.853 08:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.853 08:17:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.853 [2024-10-15 08:17:27.564473] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:25.853 [2024-10-15 08:17:27.564570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60159 ] 00:06:25.853 { 00:06:25.853 "subsystems": [ 00:06:25.853 { 00:06:25.853 "subsystem": "bdev", 00:06:25.853 "config": [ 00:06:25.853 { 00:06:25.853 "params": { 00:06:25.853 "trtype": "pcie", 00:06:25.853 "traddr": "0000:00:10.0", 00:06:25.853 "name": "Nvme0" 00:06:25.853 }, 00:06:25.853 "method": "bdev_nvme_attach_controller" 00:06:25.853 }, 00:06:25.853 { 00:06:25.853 "method": "bdev_wait_for_examine" 00:06:25.853 } 00:06:25.853 ] 00:06:25.853 } 00:06:25.853 ] 00:06:25.853 } 00:06:26.111 [2024-10-15 08:17:27.702900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.111 [2024-10-15 08:17:27.783262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.368 [2024-10-15 08:17:27.855884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.368  [2024-10-15T08:17:28.357Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:26.626 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.626 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.626 { 00:06:26.626 "subsystems": [ 00:06:26.626 { 00:06:26.626 "subsystem": "bdev", 00:06:26.626 "config": [ 00:06:26.626 { 00:06:26.626 "params": { 00:06:26.626 "trtype": "pcie", 00:06:26.626 "traddr": "0000:00:10.0", 00:06:26.626 "name": "Nvme0" 00:06:26.626 }, 00:06:26.626 "method": "bdev_nvme_attach_controller" 00:06:26.626 }, 00:06:26.626 { 00:06:26.626 "method": "bdev_wait_for_examine" 00:06:26.626 } 00:06:26.626 ] 00:06:26.626 } 00:06:26.626 ] 00:06:26.626 } 00:06:26.626 [2024-10-15 08:17:28.312185] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:26.626 [2024-10-15 08:17:28.312347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60175 ] 00:06:26.884 [2024-10-15 08:17:28.447310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.884 [2024-10-15 08:17:28.522952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.884 [2024-10-15 08:17:28.595889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.156  [2024-10-15T08:17:29.155Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:27.424 00:06:27.424 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:27.424 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:27.424 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:27.424 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:27.424 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:27.424 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:27.424 08:17:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.992 08:17:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:27.992 08:17:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:27.992 08:17:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.992 08:17:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.992 { 00:06:27.992 "subsystems": [ 00:06:27.992 { 00:06:27.992 "subsystem": "bdev", 00:06:27.992 "config": [ 00:06:27.992 { 00:06:27.992 "params": { 00:06:27.992 "trtype": "pcie", 00:06:27.992 "traddr": "0000:00:10.0", 00:06:27.992 "name": "Nvme0" 00:06:27.992 }, 00:06:27.992 "method": "bdev_nvme_attach_controller" 00:06:27.992 }, 00:06:27.992 { 00:06:27.992 "method": "bdev_wait_for_examine" 00:06:27.992 } 00:06:27.992 ] 00:06:27.992 } 00:06:27.992 ] 00:06:27.992 } 00:06:27.992 [2024-10-15 08:17:29.544580] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:27.992 [2024-10-15 08:17:29.544908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60199 ] 00:06:27.992 [2024-10-15 08:17:29.685820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.250 [2024-10-15 08:17:29.766204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.251 [2024-10-15 08:17:29.841801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.251  [2024-10-15T08:17:30.240Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:28.509 00:06:28.509 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:28.509 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:28.509 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.509 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.768 { 00:06:28.768 "subsystems": [ 00:06:28.768 { 00:06:28.768 "subsystem": "bdev", 00:06:28.768 "config": [ 00:06:28.768 { 00:06:28.768 "params": { 00:06:28.768 "trtype": "pcie", 00:06:28.768 "traddr": "0000:00:10.0", 00:06:28.768 "name": "Nvme0" 00:06:28.768 }, 00:06:28.768 "method": "bdev_nvme_attach_controller" 00:06:28.768 }, 00:06:28.768 { 00:06:28.768 "method": "bdev_wait_for_examine" 00:06:28.768 } 00:06:28.768 ] 00:06:28.768 } 00:06:28.768 ] 00:06:28.768 } 00:06:28.768 [2024-10-15 08:17:30.289644] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:28.768 [2024-10-15 08:17:30.289767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60213 ] 00:06:28.768 [2024-10-15 08:17:30.430267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.027 [2024-10-15 08:17:30.511281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.027 [2024-10-15 08:17:30.585411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.027  [2024-10-15T08:17:31.016Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:29.285 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:29.285 08:17:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.543 { 00:06:29.543 "subsystems": [ 00:06:29.543 { 00:06:29.543 "subsystem": "bdev", 00:06:29.543 "config": [ 00:06:29.543 { 00:06:29.543 "params": { 00:06:29.543 "trtype": "pcie", 00:06:29.543 "traddr": "0000:00:10.0", 00:06:29.543 "name": "Nvme0" 00:06:29.543 }, 00:06:29.543 "method": "bdev_nvme_attach_controller" 00:06:29.543 }, 00:06:29.543 { 00:06:29.543 "method": "bdev_wait_for_examine" 00:06:29.543 } 00:06:29.543 ] 00:06:29.543 } 00:06:29.543 ] 00:06:29.543 } 00:06:29.543 [2024-10-15 08:17:31.040762] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:29.543 [2024-10-15 08:17:31.040866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60228 ] 00:06:29.543 [2024-10-15 08:17:31.182770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.543 [2024-10-15 08:17:31.267640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.802 [2024-10-15 08:17:31.344789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.802  [2024-10-15T08:17:31.791Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:30.060 00:06:30.060 ************************************ 00:06:30.060 END TEST dd_rw 00:06:30.060 ************************************ 00:06:30.060 00:06:30.060 real 0m16.620s 00:06:30.060 user 0m12.083s 00:06:30.060 sys 0m6.869s 00:06:30.060 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.060 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.060 08:17:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:30.060 08:17:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.060 08:17:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.060 08:17:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.319 ************************************ 00:06:30.319 START TEST dd_rw_offset 00:06:30.319 ************************************ 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=2zaek3frbu3u17ebftv3od0cjk752vx3gi6o1070uyrzqgwi9ss0s1q3oeinv7a0atvl4cbzcwjv96ngkz7e0fzyg8azlkl8r4zvdodjxj7wh5ay397hvvnbw7b1gga35ys5c58b22gfyc8219g6f6kb0orfc5f5v3v0fhj6vu7ygxj0pw974mgbbusrgdislz9mqv9f5ntwlg6qojekl26fydrbcq2g8ako90eemcr7472ca6unaw2amb1xcdf71b0vekap2jtopwejkf5ds5463ftauzsmjugh3eudvbpnharnimj08137owjb927e3mu630i2axkfzyvm1ak0eyj7l18ymyumdmg1y6nfs3jpjygvavguu5bph985p1d1olffu6fen2dkye7916ny99eyegyw75b7s2xzruxva9yyl7w5144tp2vcg8tharnfajcvs2pdxwzcyt89cv630z07326zi8uz2y507mitkrabr5xwwib8uo70gf2373oyeag127quxmmrdb8y2qoltb547umzj1mqhkghvv7vh48jl3nweox4pt8fhxpn7e8cl8i51hguxdgjrpiu2bt1urc101mxo6h24nym5u2d6b9knzjia454at9k50jzduv4ydggv7c0b9q9131gcl2eudp4slccb5yj3rz07w9hvzwb3e2za77ghq0d7eymi2k8ukbm66hxv8ck0gyf73fl6cn108xiwkhk4phl4ryzoezksn3kldyxjmyhkxsfdm1huvt1vvvqy3qnglwe27x7v126kpiq00a8m40ulgtbbaygd4ev52d8avld9bz6trhno6ahxe9dsp8ysy31l9mstzjvhlth7hgbhn6vsrcd7mp04fu47y3u63jxnypu044prcq50xbcpw96i1r5jjgjy8pj83stay9c2zo769gar8nbosih7soft66sgrr2k38n99q0k75vy6kt6pi1pl2oaci5s3mfriatka9gp72fdznapsdw545nqjkeijycvolej2j595aegqf5qq7hkvg14302dt895ezm9es4yhhwx12r6vasf5xktj461ejt22bheyhi71vijd80dt6im4bfwz39h34ce84a2jnjna76eblkjdalrzerpjvwa9m5l3ldqmnebnfnkhwmu6c6uat4uvw5qsz5n6x9fu1mz6uippxp8l3w7etr9p3p4rrx8zlkilgux3lx305i4r0khoyvym1cer6n2po5gc7vkvi9mp1y1udq8kevsxhdapw2ype6g6atfb32qmmxwusrcmc4hsmlb9wrxpiurwya87s4cdyv60dxlth0wzwon4ob0dzk6zbhns2vhotcia76zi980r3ftwh409d4g7bjl4m0jii71mpbu7boqacctbx8r8j3yz0bqw163m5goo03rreutbtj0rr6qffrauvbclysgcfhjsgvp3smrrg872722mx7zck6hbsh4fyp73k6conpibbagxbzk2eo4i2vlb28yr1nkrpp305lil6qz936j152ovwaiyu7se7r93q4fv614a544b2i2pshplnvta0zr7s26rrjsgcoctx5mjd5afgulv4itj17hhd6ysmatz8kpou781rk3ounw6w5o0dz8zmjsygm9drhmpkiox2bpxsnawcvh0n4pyw15508w7gy05q8uing7c7y0clf5ho3aq6us3gxmd4ay0j8snc621c7byqbywmotqnabevz8i9wlihyp588wmevyob8u2rf0gwehvj3liz519csc5fesk2xdouiubk78j499wd4m1ufuv4r8tyab6njw9lhnso6stoe8p9wy13sb321y1wkcn8xvyq7f4aae7o59wfdx4vng2c05jftmczfbcxym4oprh6uia87nqpqc0sfpt6xagq5spn2gvfcitng30jz2wankv46lk1jmlzdo1g5mddkk2r61l3oyfgoq1hkvkmvf49r30k50w258ypu1vpsetsgei4lim9hsjw8o78qf5g0ffjp5bvbhxvkxrxpn5dotmmdlou1fumyawxolwubsby98g0mi21ugprgm3zyurp09nbcgdalrb0wy2hjeh6yoj5cuhay017yr6hvfpfyfyj2lqsbjguv4lu4xq65350phjz39gszji8p4or97m1eehkvkyo81vks0x5dxjjkwll8wq7hwqiyis9bkv98j6sifqc622d8w80jgccy6itg5q4kyr44vasgfuoqc8mdttho34zn8czo0gltoprfkcedva5mc724lxmfok8a5w757sfm50fk367fz4duafppvo70ffkb57shgwpw0zcfinu1sjqroz27564ne54xd55yl8dfdu1i5j0nkjp1nthzkubw9zpkvnvyoy3g14xvjhcd8bi4o2v570wcop2adm6y2xl3ozhde05tfzerg11z44xixrgm0tpccbl50olivc97h8a2251xdtuaia2ligf8vh8rff3buei0mx9h80oypli97veoo6razorxgro4dxg95zfgi1ohz21u9ngu4widbq0qoiqodr1opg4j5u4pbe7t3q2pa50ggkaky24ctaopjyevnb0zph9kaue6ycp1uyljuhokwaj74ri69k0oi28q7trmyp0hq1cco3be8dupmhn9qz10qh965to3wxw398fnyl736hxs916c06xp2cgltidar0361ycbwe4dw244mfee8y082wi85tsfnpmr8i97qzoytbayrtwg32ushp6glsi68q3xil1xfwbqtn3xomzgzctbp39uoc8w42slghn2zms77asnc19f4nqyixbbqxbte3lx725j8np9sejr30cn2cym0wadworkcg3aa02762eu3p23vujdv5snthzqvs2yqjdyqthylb9taqvc11t4rwks5q3pk8mi9j6g77rmmr56ke5jvyec29n6l9zxlemxfgfuzvo3rgafxeyss8fw89jg2p0h4a1kw8gsxx943j8sss3d5g5txdagixnehi9u0wow5vu1gbxgqg2a3cyisofgwd5dy9dtxkorn5x5n927ik6epe4n6xe7mkr45h22kp31umn4evac4r5bsgxu6boc9oabne7q27d0libskp4zm9bbfznf8r8vmswnhqqqpvugakfkxpicr985u1dea5y809ovf9ufu8u3cujhqoluolrrtzsj21wt788bx0ltgosm0nryydhnfcaouaix3p528n44wru0sgk4wwopn49ny01t6l3mxle7srgl2wq4usy3fbho3adosiqsi1luazj0183f6fcy1b0n2ux9q4ihesddl74k9yy1vamvhxdgahq1to5ome7bfn9omu5e0iyv4agsq7tomqs76gdowj74pczglyjce6xgkhi4okcuxhx1j4uxk4lu52p6k5ce52wxm3h4dvnde4fovbw3f5f5f546onpu85y9ynghlo461iyyub3deduo6ivl69v2mqgagunaw499lj0zqs3w67pc5b7eouvvx5l32bezzy2ktg3e51
i91dkkajttaie4xr8w6v6g79sl1i2qi46fkrpf7eruulcwv6kcjf24f131sjqe4lm87lboxpnr1rsg4kc2wwqt21ya2a566eiukugp98naf8ylgarm6qaeyhswyvmr1nzuy0ys6n0hmym2hkjhj9o402iqd1300vd185iu2aawpx7n48qnhifhctxu8k3adzry9zoerj0x86m1mdhozm8n19tnqx91kx3z04ukvebalsohv3zdad4quf20pgl15zpfyt6x2019yfzba5emtjs60xzo6umuazwbwm4gxyz1x1uj00xens1vaq3aayfoyiw6h550lu0v3mwc9wbdjux6omczv2nh9cd28g0pqyi21os46z9cjzh161lfivr28gjlvo12f490pv98fe6xsiqusuzojtu8k5aul82hh0448i3vo3z2ur84mnursgad5g3znop625mi7bbc0yhfct8qnl2aqt8uqrpeooa5212a867355hp7f5azamxok06pb6i2xruc3sfvn47x1y9tjexnvxds3354per 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:30.319 08:17:31 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:30.319 [2024-10-15 08:17:31.916332] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:30.319 [2024-10-15 08:17:31.916441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60264 ] 00:06:30.319 { 00:06:30.319 "subsystems": [ 00:06:30.319 { 00:06:30.319 "subsystem": "bdev", 00:06:30.319 "config": [ 00:06:30.319 { 00:06:30.319 "params": { 00:06:30.319 "trtype": "pcie", 00:06:30.319 "traddr": "0000:00:10.0", 00:06:30.319 "name": "Nvme0" 00:06:30.319 }, 00:06:30.319 "method": "bdev_nvme_attach_controller" 00:06:30.319 }, 00:06:30.319 { 00:06:30.319 "method": "bdev_wait_for_examine" 00:06:30.319 } 00:06:30.319 ] 00:06:30.319 } 00:06:30.319 ] 00:06:30.319 } 00:06:30.578 [2024-10-15 08:17:32.055214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.578 [2024-10-15 08:17:32.135097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.578 [2024-10-15 08:17:32.209137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.836  [2024-10-15T08:17:32.825Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:31.094 00:06:31.094 08:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:31.094 08:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:31.094 08:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:31.094 08:17:32 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:31.094 { 00:06:31.094 "subsystems": [ 00:06:31.094 { 00:06:31.094 "subsystem": "bdev", 00:06:31.094 "config": [ 00:06:31.094 { 00:06:31.094 "params": { 00:06:31.094 "trtype": "pcie", 00:06:31.094 "traddr": "0000:00:10.0", 00:06:31.094 "name": "Nvme0" 00:06:31.094 }, 00:06:31.094 "method": "bdev_nvme_attach_controller" 00:06:31.094 }, 00:06:31.094 { 00:06:31.094 "method": "bdev_wait_for_examine" 00:06:31.094 } 00:06:31.094 ] 00:06:31.094 } 00:06:31.094 ] 00:06:31.094 } 00:06:31.094 [2024-10-15 08:17:32.669070] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:31.094 [2024-10-15 08:17:32.669220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60283 ] 00:06:31.094 [2024-10-15 08:17:32.808662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.352 [2024-10-15 08:17:32.891469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.352 [2024-10-15 08:17:32.968318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.610  [2024-10-15T08:17:33.600Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:31.869 00:06:31.869 08:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:31.869 ************************************ 00:06:31.869 END TEST dd_rw_offset 00:06:31.869 ************************************ 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 2zaek3frbu3u17ebftv3od0cjk752vx3gi6o1070uyrzqgwi9ss0s1q3oeinv7a0atvl4cbzcwjv96ngkz7e0fzyg8azlkl8r4zvdodjxj7wh5ay397hvvnbw7b1gga35ys5c58b22gfyc8219g6f6kb0orfc5f5v3v0fhj6vu7ygxj0pw974mgbbusrgdislz9mqv9f5ntwlg6qojekl26fydrbcq2g8ako90eemcr7472ca6unaw2amb1xcdf71b0vekap2jtopwejkf5ds5463ftauzsmjugh3eudvbpnharnimj08137owjb927e3mu630i2axkfzyvm1ak0eyj7l18ymyumdmg1y6nfs3jpjygvavguu5bph985p1d1olffu6fen2dkye7916ny99eyegyw75b7s2xzruxva9yyl7w5144tp2vcg8tharnfajcvs2pdxwzcyt89cv630z07326zi8uz2y507mitkrabr5xwwib8uo70gf2373oyeag127quxmmrdb8y2qoltb547umzj1mqhkghvv7vh48jl3nweox4pt8fhxpn7e8cl8i51hguxdgjrpiu2bt1urc101mxo6h24nym5u2d6b9knzjia454at9k50jzduv4ydggv7c0b9q9131gcl2eudp4slccb5yj3rz07w9hvzwb3e2za77ghq0d7eymi2k8ukbm66hxv8ck0gyf73fl6cn108xiwkhk4phl4ryzoezksn3kldyxjmyhkxsfdm1huvt1vvvqy3qnglwe27x7v126kpiq00a8m40ulgtbbaygd4ev52d8avld9bz6trhno6ahxe9dsp8ysy31l9mstzjvhlth7hgbhn6vsrcd7mp04fu47y3u63jxnypu044prcq50xbcpw96i1r5jjgjy8pj83stay9c2zo769gar8nbosih7soft66sgrr2k38n99q0k75vy6kt6pi1pl2oaci5s3mfriatka9gp72fdznapsdw545nqjkeijycvolej2j595aegqf5qq7hkvg14302dt895ezm9es4yhhwx12r6vasf5xktj461ejt22bheyhi71vijd80dt6im4bfwz39h34ce84a2jnjna76eblkjdalrzerpjvwa9m5l3ldqmnebnfnkhwmu6c6uat4uvw5qsz5n6x9fu1mz6uippxp8l3w7etr9p3p4rrx8zlkilgux3lx305i4r0khoyvym1cer6n2po5gc7vkvi9mp1y1udq8kevsxhdapw2ype6g6atfb32qmmxwusrcmc4hsmlb9wrxpiurwya87s4cdyv60dxlth0wzwon4ob0dzk6zbhns2vhotcia76zi980r3ftwh409d4g7bjl4m0jii71mpbu7boqacctbx8r8j3yz0bqw163m5goo03rreutbtj0rr6qffrauvbclysgcfhjsgvp3smrrg872722mx7zck6hbsh4fyp73k6conpibbagxbzk2eo4i2vlb28yr1nkrpp305lil6qz936j152ovwaiyu7se7r93q4fv614a544b2i2pshplnvta0zr7s26rrjsgcoctx5mjd5afgulv4itj17hhd6ysmatz8kpou781rk3ounw6w5o0dz8zmjsygm9drhmpkiox2bpxsnawcvh0n4pyw15508w7gy05q8uing7c7y0clf5ho3aq6us3gxmd4ay0j8snc621c7byqbywmotqnabevz8i9wlihyp588wmevyob8u2rf0gwehvj3liz519csc5fesk2xdouiubk78j499wd4m1ufuv4r8tyab6njw9lhnso6stoe8p9wy13sb321y1wkcn8xvyq7f4aae7o59wfdx4vng2c05jftmczfbcxym4oprh6uia87nqpqc0sfpt6xagq5spn2gvfcitng30jz2wankv46lk1jmlzdo1g5mddkk2r61l3oyfgoq1hkvkmvf49r30k50w258ypu1vpsetsgei4lim9hsjw8o78qf5g0ffjp5bvbhxvkxrxpn5dotmmdlou1fumyawxolwubsby98g0mi21ugprgm3zyurp09nbcgdalrb0wy2hjeh6yoj5cuhay017yr6hvfpfyfyj2lqsbjguv4lu4xq65350phjz39gszji8p4or97m1eehkvkyo81vks0x5dxjjkwll8wq7hwqiyis9bkv98j6sifqc622d8w80jgccy6itg5q4kyr44vasgfuoqc8mdttho34zn8czo0gltoprfkcedva5mc724lxmfok8a5w757sfm50fk367fz4duafppvo70ffkb57shgwpw0zcfinu1sjqroz27564ne54xd55yl8dfdu1i5j0nkjp1nthzkubw9zpkvnvyoy3g14xvjhcd8bi4o2v570wcop2adm6y2xl3ozhde05tfzerg11z44xixrgm0tpccbl50olivc97h
8a2251xdtuaia2ligf8vh8rff3buei0mx9h80oypli97veoo6razorxgro4dxg95zfgi1ohz21u9ngu4widbq0qoiqodr1opg4j5u4pbe7t3q2pa50ggkaky24ctaopjyevnb0zph9kaue6ycp1uyljuhokwaj74ri69k0oi28q7trmyp0hq1cco3be8dupmhn9qz10qh965to3wxw398fnyl736hxs916c06xp2cgltidar0361ycbwe4dw244mfee8y082wi85tsfnpmr8i97qzoytbayrtwg32ushp6glsi68q3xil1xfwbqtn3xomzgzctbp39uoc8w42slghn2zms77asnc19f4nqyixbbqxbte3lx725j8np9sejr30cn2cym0wadworkcg3aa02762eu3p23vujdv5snthzqvs2yqjdyqthylb9taqvc11t4rwks5q3pk8mi9j6g77rmmr56ke5jvyec29n6l9zxlemxfgfuzvo3rgafxeyss8fw89jg2p0h4a1kw8gsxx943j8sss3d5g5txdagixnehi9u0wow5vu1gbxgqg2a3cyisofgwd5dy9dtxkorn5x5n927ik6epe4n6xe7mkr45h22kp31umn4evac4r5bsgxu6boc9oabne7q27d0libskp4zm9bbfznf8r8vmswnhqqqpvugakfkxpicr985u1dea5y809ovf9ufu8u3cujhqoluolrrtzsj21wt788bx0ltgosm0nryydhnfcaouaix3p528n44wru0sgk4wwopn49ny01t6l3mxle7srgl2wq4usy3fbho3adosiqsi1luazj0183f6fcy1b0n2ux9q4ihesddl74k9yy1vamvhxdgahq1to5ome7bfn9omu5e0iyv4agsq7tomqs76gdowj74pczglyjce6xgkhi4okcuxhx1j4uxk4lu52p6k5ce52wxm3h4dvnde4fovbw3f5f5f546onpu85y9ynghlo461iyyub3deduo6ivl69v2mqgagunaw499lj0zqs3w67pc5b7eouvvx5l32bezzy2ktg3e51i91dkkajttaie4xr8w6v6g79sl1i2qi46fkrpf7eruulcwv6kcjf24f131sjqe4lm87lboxpnr1rsg4kc2wwqt21ya2a566eiukugp98naf8ylgarm6qaeyhswyvmr1nzuy0ys6n0hmym2hkjhj9o402iqd1300vd185iu2aawpx7n48qnhifhctxu8k3adzry9zoerj0x86m1mdhozm8n19tnqx91kx3z04ukvebalsohv3zdad4quf20pgl15zpfyt6x2019yfzba5emtjs60xzo6umuazwbwm4gxyz1x1uj00xens1vaq3aayfoyiw6h550lu0v3mwc9wbdjux6omczv2nh9cd28g0pqyi21os46z9cjzh161lfivr28gjlvo12f490pv98fe6xsiqusuzojtu8k5aul82hh0448i3vo3z2ur84mnursgad5g3znop625mi7bbc0yhfct8qnl2aqt8uqrpeooa5212a867355hp7f5azamxok06pb6i2xruc3sfvn47x1y9tjexnvxds3354per == \2\z\a\e\k\3\f\r\b\u\3\u\1\7\e\b\f\t\v\3\o\d\0\c\j\k\7\5\2\v\x\3\g\i\6\o\1\0\7\0\u\y\r\z\q\g\w\i\9\s\s\0\s\1\q\3\o\e\i\n\v\7\a\0\a\t\v\l\4\c\b\z\c\w\j\v\9\6\n\g\k\z\7\e\0\f\z\y\g\8\a\z\l\k\l\8\r\4\z\v\d\o\d\j\x\j\7\w\h\5\a\y\3\9\7\h\v\v\n\b\w\7\b\1\g\g\a\3\5\y\s\5\c\5\8\b\2\2\g\f\y\c\8\2\1\9\g\6\f\6\k\b\0\o\r\f\c\5\f\5\v\3\v\0\f\h\j\6\v\u\7\y\g\x\j\0\p\w\9\7\4\m\g\b\b\u\s\r\g\d\i\s\l\z\9\m\q\v\9\f\5\n\t\w\l\g\6\q\o\j\e\k\l\2\6\f\y\d\r\b\c\q\2\g\8\a\k\o\9\0\e\e\m\c\r\7\4\7\2\c\a\6\u\n\a\w\2\a\m\b\1\x\c\d\f\7\1\b\0\v\e\k\a\p\2\j\t\o\p\w\e\j\k\f\5\d\s\5\4\6\3\f\t\a\u\z\s\m\j\u\g\h\3\e\u\d\v\b\p\n\h\a\r\n\i\m\j\0\8\1\3\7\o\w\j\b\9\2\7\e\3\m\u\6\3\0\i\2\a\x\k\f\z\y\v\m\1\a\k\0\e\y\j\7\l\1\8\y\m\y\u\m\d\m\g\1\y\6\n\f\s\3\j\p\j\y\g\v\a\v\g\u\u\5\b\p\h\9\8\5\p\1\d\1\o\l\f\f\u\6\f\e\n\2\d\k\y\e\7\9\1\6\n\y\9\9\e\y\e\g\y\w\7\5\b\7\s\2\x\z\r\u\x\v\a\9\y\y\l\7\w\5\1\4\4\t\p\2\v\c\g\8\t\h\a\r\n\f\a\j\c\v\s\2\p\d\x\w\z\c\y\t\8\9\c\v\6\3\0\z\0\7\3\2\6\z\i\8\u\z\2\y\5\0\7\m\i\t\k\r\a\b\r\5\x\w\w\i\b\8\u\o\7\0\g\f\2\3\7\3\o\y\e\a\g\1\2\7\q\u\x\m\m\r\d\b\8\y\2\q\o\l\t\b\5\4\7\u\m\z\j\1\m\q\h\k\g\h\v\v\7\v\h\4\8\j\l\3\n\w\e\o\x\4\p\t\8\f\h\x\p\n\7\e\8\c\l\8\i\5\1\h\g\u\x\d\g\j\r\p\i\u\2\b\t\1\u\r\c\1\0\1\m\x\o\6\h\2\4\n\y\m\5\u\2\d\6\b\9\k\n\z\j\i\a\4\5\4\a\t\9\k\5\0\j\z\d\u\v\4\y\d\g\g\v\7\c\0\b\9\q\9\1\3\1\g\c\l\2\e\u\d\p\4\s\l\c\c\b\5\y\j\3\r\z\0\7\w\9\h\v\z\w\b\3\e\2\z\a\7\7\g\h\q\0\d\7\e\y\m\i\2\k\8\u\k\b\m\6\6\h\x\v\8\c\k\0\g\y\f\7\3\f\l\6\c\n\1\0\8\x\i\w\k\h\k\4\p\h\l\4\r\y\z\o\e\z\k\s\n\3\k\l\d\y\x\j\m\y\h\k\x\s\f\d\m\1\h\u\v\t\1\v\v\v\q\y\3\q\n\g\l\w\e\2\7\x\7\v\1\2\6\k\p\i\q\0\0\a\8\m\4\0\u\l\g\t\b\b\a\y\g\d\4\e\v\5\2\d\8\a\v\l\d\9\b\z\6\t\r\h\n\o\6\a\h\x\e\9\d\s\p\8\y\s\y\3\1\l\9\m\s\t\z\j\v\h\l\t\h\7\h\g\b\h\n\6\v\s\r\c\d\7\m\p\0\4\f\u\4\7\y\3\u\6\3\j\x\n\y\p\u\0\4\4\p\r\c\q\5\0\x\b\c\p\w\9\6\i\1\r\5\j\j\g\j\y\8\p\j\8\3\s\t\a\y\9\c\2\z\o\7\6\9\g\a\r\8\n\b\o\s\i\h\7\s\o\f\t\6\6\s\g\r\r\2\k\3\8\n\9\9\q\0
\k\7\5\v\y\6\k\t\6\p\i\1\p\l\2\o\a\c\i\5\s\3\m\f\r\i\a\t\k\a\9\g\p\7\2\f\d\z\n\a\p\s\d\w\5\4\5\n\q\j\k\e\i\j\y\c\v\o\l\e\j\2\j\5\9\5\a\e\g\q\f\5\q\q\7\h\k\v\g\1\4\3\0\2\d\t\8\9\5\e\z\m\9\e\s\4\y\h\h\w\x\1\2\r\6\v\a\s\f\5\x\k\t\j\4\6\1\e\j\t\2\2\b\h\e\y\h\i\7\1\v\i\j\d\8\0\d\t\6\i\m\4\b\f\w\z\3\9\h\3\4\c\e\8\4\a\2\j\n\j\n\a\7\6\e\b\l\k\j\d\a\l\r\z\e\r\p\j\v\w\a\9\m\5\l\3\l\d\q\m\n\e\b\n\f\n\k\h\w\m\u\6\c\6\u\a\t\4\u\v\w\5\q\s\z\5\n\6\x\9\f\u\1\m\z\6\u\i\p\p\x\p\8\l\3\w\7\e\t\r\9\p\3\p\4\r\r\x\8\z\l\k\i\l\g\u\x\3\l\x\3\0\5\i\4\r\0\k\h\o\y\v\y\m\1\c\e\r\6\n\2\p\o\5\g\c\7\v\k\v\i\9\m\p\1\y\1\u\d\q\8\k\e\v\s\x\h\d\a\p\w\2\y\p\e\6\g\6\a\t\f\b\3\2\q\m\m\x\w\u\s\r\c\m\c\4\h\s\m\l\b\9\w\r\x\p\i\u\r\w\y\a\8\7\s\4\c\d\y\v\6\0\d\x\l\t\h\0\w\z\w\o\n\4\o\b\0\d\z\k\6\z\b\h\n\s\2\v\h\o\t\c\i\a\7\6\z\i\9\8\0\r\3\f\t\w\h\4\0\9\d\4\g\7\b\j\l\4\m\0\j\i\i\7\1\m\p\b\u\7\b\o\q\a\c\c\t\b\x\8\r\8\j\3\y\z\0\b\q\w\1\6\3\m\5\g\o\o\0\3\r\r\e\u\t\b\t\j\0\r\r\6\q\f\f\r\a\u\v\b\c\l\y\s\g\c\f\h\j\s\g\v\p\3\s\m\r\r\g\8\7\2\7\2\2\m\x\7\z\c\k\6\h\b\s\h\4\f\y\p\7\3\k\6\c\o\n\p\i\b\b\a\g\x\b\z\k\2\e\o\4\i\2\v\l\b\2\8\y\r\1\n\k\r\p\p\3\0\5\l\i\l\6\q\z\9\3\6\j\1\5\2\o\v\w\a\i\y\u\7\s\e\7\r\9\3\q\4\f\v\6\1\4\a\5\4\4\b\2\i\2\p\s\h\p\l\n\v\t\a\0\z\r\7\s\2\6\r\r\j\s\g\c\o\c\t\x\5\m\j\d\5\a\f\g\u\l\v\4\i\t\j\1\7\h\h\d\6\y\s\m\a\t\z\8\k\p\o\u\7\8\1\r\k\3\o\u\n\w\6\w\5\o\0\d\z\8\z\m\j\s\y\g\m\9\d\r\h\m\p\k\i\o\x\2\b\p\x\s\n\a\w\c\v\h\0\n\4\p\y\w\1\5\5\0\8\w\7\g\y\0\5\q\8\u\i\n\g\7\c\7\y\0\c\l\f\5\h\o\3\a\q\6\u\s\3\g\x\m\d\4\a\y\0\j\8\s\n\c\6\2\1\c\7\b\y\q\b\y\w\m\o\t\q\n\a\b\e\v\z\8\i\9\w\l\i\h\y\p\5\8\8\w\m\e\v\y\o\b\8\u\2\r\f\0\g\w\e\h\v\j\3\l\i\z\5\1\9\c\s\c\5\f\e\s\k\2\x\d\o\u\i\u\b\k\7\8\j\4\9\9\w\d\4\m\1\u\f\u\v\4\r\8\t\y\a\b\6\n\j\w\9\l\h\n\s\o\6\s\t\o\e\8\p\9\w\y\1\3\s\b\3\2\1\y\1\w\k\c\n\8\x\v\y\q\7\f\4\a\a\e\7\o\5\9\w\f\d\x\4\v\n\g\2\c\0\5\j\f\t\m\c\z\f\b\c\x\y\m\4\o\p\r\h\6\u\i\a\8\7\n\q\p\q\c\0\s\f\p\t\6\x\a\g\q\5\s\p\n\2\g\v\f\c\i\t\n\g\3\0\j\z\2\w\a\n\k\v\4\6\l\k\1\j\m\l\z\d\o\1\g\5\m\d\d\k\k\2\r\6\1\l\3\o\y\f\g\o\q\1\h\k\v\k\m\v\f\4\9\r\3\0\k\5\0\w\2\5\8\y\p\u\1\v\p\s\e\t\s\g\e\i\4\l\i\m\9\h\s\j\w\8\o\7\8\q\f\5\g\0\f\f\j\p\5\b\v\b\h\x\v\k\x\r\x\p\n\5\d\o\t\m\m\d\l\o\u\1\f\u\m\y\a\w\x\o\l\w\u\b\s\b\y\9\8\g\0\m\i\2\1\u\g\p\r\g\m\3\z\y\u\r\p\0\9\n\b\c\g\d\a\l\r\b\0\w\y\2\h\j\e\h\6\y\o\j\5\c\u\h\a\y\0\1\7\y\r\6\h\v\f\p\f\y\f\y\j\2\l\q\s\b\j\g\u\v\4\l\u\4\x\q\6\5\3\5\0\p\h\j\z\3\9\g\s\z\j\i\8\p\4\o\r\9\7\m\1\e\e\h\k\v\k\y\o\8\1\v\k\s\0\x\5\d\x\j\j\k\w\l\l\8\w\q\7\h\w\q\i\y\i\s\9\b\k\v\9\8\j\6\s\i\f\q\c\6\2\2\d\8\w\8\0\j\g\c\c\y\6\i\t\g\5\q\4\k\y\r\4\4\v\a\s\g\f\u\o\q\c\8\m\d\t\t\h\o\3\4\z\n\8\c\z\o\0\g\l\t\o\p\r\f\k\c\e\d\v\a\5\m\c\7\2\4\l\x\m\f\o\k\8\a\5\w\7\5\7\s\f\m\5\0\f\k\3\6\7\f\z\4\d\u\a\f\p\p\v\o\7\0\f\f\k\b\5\7\s\h\g\w\p\w\0\z\c\f\i\n\u\1\s\j\q\r\o\z\2\7\5\6\4\n\e\5\4\x\d\5\5\y\l\8\d\f\d\u\1\i\5\j\0\n\k\j\p\1\n\t\h\z\k\u\b\w\9\z\p\k\v\n\v\y\o\y\3\g\1\4\x\v\j\h\c\d\8\b\i\4\o\2\v\5\7\0\w\c\o\p\2\a\d\m\6\y\2\x\l\3\o\z\h\d\e\0\5\t\f\z\e\r\g\1\1\z\4\4\x\i\x\r\g\m\0\t\p\c\c\b\l\5\0\o\l\i\v\c\9\7\h\8\a\2\2\5\1\x\d\t\u\a\i\a\2\l\i\g\f\8\v\h\8\r\f\f\3\b\u\e\i\0\m\x\9\h\8\0\o\y\p\l\i\9\7\v\e\o\o\6\r\a\z\o\r\x\g\r\o\4\d\x\g\9\5\z\f\g\i\1\o\h\z\2\1\u\9\n\g\u\4\w\i\d\b\q\0\q\o\i\q\o\d\r\1\o\p\g\4\j\5\u\4\p\b\e\7\t\3\q\2\p\a\5\0\g\g\k\a\k\y\2\4\c\t\a\o\p\j\y\e\v\n\b\0\z\p\h\9\k\a\u\e\6\y\c\p\1\u\y\l\j\u\h\o\k\w\a\j\7\4\r\i\6\9\k\0\o\i\2\8\q\7\t\r\m\y\p\0\h\q\1\c\c\o\3\b\e\8\d\u\p\m\h\n\9\q\z\1\0\q\h\9\6\5\t\o\3\w\x\w\3\9\8\f\n\y\l\7\3\6\h\x\s\9\1\6\c\0\6\x\p\2\c\g\l\t\i\d\a\r\0\3\6\1\y\c\b\w\e\4\d\w\2\4\4\m\f\e\e\8\y\0\8\2\w\i\8\5\
t\s\f\n\p\m\r\8\i\9\7\q\z\o\y\t\b\a\y\r\t\w\g\3\2\u\s\h\p\6\g\l\s\i\6\8\q\3\x\i\l\1\x\f\w\b\q\t\n\3\x\o\m\z\g\z\c\t\b\p\3\9\u\o\c\8\w\4\2\s\l\g\h\n\2\z\m\s\7\7\a\s\n\c\1\9\f\4\n\q\y\i\x\b\b\q\x\b\t\e\3\l\x\7\2\5\j\8\n\p\9\s\e\j\r\3\0\c\n\2\c\y\m\0\w\a\d\w\o\r\k\c\g\3\a\a\0\2\7\6\2\e\u\3\p\2\3\v\u\j\d\v\5\s\n\t\h\z\q\v\s\2\y\q\j\d\y\q\t\h\y\l\b\9\t\a\q\v\c\1\1\t\4\r\w\k\s\5\q\3\p\k\8\m\i\9\j\6\g\7\7\r\m\m\r\5\6\k\e\5\j\v\y\e\c\2\9\n\6\l\9\z\x\l\e\m\x\f\g\f\u\z\v\o\3\r\g\a\f\x\e\y\s\s\8\f\w\8\9\j\g\2\p\0\h\4\a\1\k\w\8\g\s\x\x\9\4\3\j\8\s\s\s\3\d\5\g\5\t\x\d\a\g\i\x\n\e\h\i\9\u\0\w\o\w\5\v\u\1\g\b\x\g\q\g\2\a\3\c\y\i\s\o\f\g\w\d\5\d\y\9\d\t\x\k\o\r\n\5\x\5\n\9\2\7\i\k\6\e\p\e\4\n\6\x\e\7\m\k\r\4\5\h\2\2\k\p\3\1\u\m\n\4\e\v\a\c\4\r\5\b\s\g\x\u\6\b\o\c\9\o\a\b\n\e\7\q\2\7\d\0\l\i\b\s\k\p\4\z\m\9\b\b\f\z\n\f\8\r\8\v\m\s\w\n\h\q\q\q\p\v\u\g\a\k\f\k\x\p\i\c\r\9\8\5\u\1\d\e\a\5\y\8\0\9\o\v\f\9\u\f\u\8\u\3\c\u\j\h\q\o\l\u\o\l\r\r\t\z\s\j\2\1\w\t\7\8\8\b\x\0\l\t\g\o\s\m\0\n\r\y\y\d\h\n\f\c\a\o\u\a\i\x\3\p\5\2\8\n\4\4\w\r\u\0\s\g\k\4\w\w\o\p\n\4\9\n\y\0\1\t\6\l\3\m\x\l\e\7\s\r\g\l\2\w\q\4\u\s\y\3\f\b\h\o\3\a\d\o\s\i\q\s\i\1\l\u\a\z\j\0\1\8\3\f\6\f\c\y\1\b\0\n\2\u\x\9\q\4\i\h\e\s\d\d\l\7\4\k\9\y\y\1\v\a\m\v\h\x\d\g\a\h\q\1\t\o\5\o\m\e\7\b\f\n\9\o\m\u\5\e\0\i\y\v\4\a\g\s\q\7\t\o\m\q\s\7\6\g\d\o\w\j\7\4\p\c\z\g\l\y\j\c\e\6\x\g\k\h\i\4\o\k\c\u\x\h\x\1\j\4\u\x\k\4\l\u\5\2\p\6\k\5\c\e\5\2\w\x\m\3\h\4\d\v\n\d\e\4\f\o\v\b\w\3\f\5\f\5\f\5\4\6\o\n\p\u\8\5\y\9\y\n\g\h\l\o\4\6\1\i\y\y\u\b\3\d\e\d\u\o\6\i\v\l\6\9\v\2\m\q\g\a\g\u\n\a\w\4\9\9\l\j\0\z\q\s\3\w\6\7\p\c\5\b\7\e\o\u\v\v\x\5\l\3\2\b\e\z\z\y\2\k\t\g\3\e\5\1\i\9\1\d\k\k\a\j\t\t\a\i\e\4\x\r\8\w\6\v\6\g\7\9\s\l\1\i\2\q\i\4\6\f\k\r\p\f\7\e\r\u\u\l\c\w\v\6\k\c\j\f\2\4\f\1\3\1\s\j\q\e\4\l\m\8\7\l\b\o\x\p\n\r\1\r\s\g\4\k\c\2\w\w\q\t\2\1\y\a\2\a\5\6\6\e\i\u\k\u\g\p\9\8\n\a\f\8\y\l\g\a\r\m\6\q\a\e\y\h\s\w\y\v\m\r\1\n\z\u\y\0\y\s\6\n\0\h\m\y\m\2\h\k\j\h\j\9\o\4\0\2\i\q\d\1\3\0\0\v\d\1\8\5\i\u\2\a\a\w\p\x\7\n\4\8\q\n\h\i\f\h\c\t\x\u\8\k\3\a\d\z\r\y\9\z\o\e\r\j\0\x\8\6\m\1\m\d\h\o\z\m\8\n\1\9\t\n\q\x\9\1\k\x\3\z\0\4\u\k\v\e\b\a\l\s\o\h\v\3\z\d\a\d\4\q\u\f\2\0\p\g\l\1\5\z\p\f\y\t\6\x\2\0\1\9\y\f\z\b\a\5\e\m\t\j\s\6\0\x\z\o\6\u\m\u\a\z\w\b\w\m\4\g\x\y\z\1\x\1\u\j\0\0\x\e\n\s\1\v\a\q\3\a\a\y\f\o\y\i\w\6\h\5\5\0\l\u\0\v\3\m\w\c\9\w\b\d\j\u\x\6\o\m\c\z\v\2\n\h\9\c\d\2\8\g\0\p\q\y\i\2\1\o\s\4\6\z\9\c\j\z\h\1\6\1\l\f\i\v\r\2\8\g\j\l\v\o\1\2\f\4\9\0\p\v\9\8\f\e\6\x\s\i\q\u\s\u\z\o\j\t\u\8\k\5\a\u\l\8\2\h\h\0\4\4\8\i\3\v\o\3\z\2\u\r\8\4\m\n\u\r\s\g\a\d\5\g\3\z\n\o\p\6\2\5\m\i\7\b\b\c\0\y\h\f\c\t\8\q\n\l\2\a\q\t\8\u\q\r\p\e\o\o\a\5\2\1\2\a\8\6\7\3\5\5\h\p\7\f\5\a\z\a\m\x\o\k\0\6\p\b\6\i\2\x\r\u\c\3\s\f\v\n\4\7\x\1\y\9\t\j\e\x\n\v\x\d\s\3\3\5\4\p\e\r ]] 00:06:31.870 00:06:31.870 real 0m1.593s 00:06:31.870 user 0m1.095s 00:06:31.870 sys 0m0.784s 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.870 08:17:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.870 [2024-10-15 08:17:33.491983] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:31.870 [2024-10-15 08:17:33.492093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60313 ] 00:06:31.870 { 00:06:31.870 "subsystems": [ 00:06:31.870 { 00:06:31.870 "subsystem": "bdev", 00:06:31.870 "config": [ 00:06:31.870 { 00:06:31.870 "params": { 00:06:31.870 "trtype": "pcie", 00:06:31.870 "traddr": "0000:00:10.0", 00:06:31.870 "name": "Nvme0" 00:06:31.870 }, 00:06:31.870 "method": "bdev_nvme_attach_controller" 00:06:31.870 }, 00:06:31.870 { 00:06:31.870 "method": "bdev_wait_for_examine" 00:06:31.870 } 00:06:31.870 ] 00:06:31.870 } 00:06:31.870 ] 00:06:31.870 } 00:06:32.128 [2024-10-15 08:17:33.622890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.128 [2024-10-15 08:17:33.716959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.128 [2024-10-15 08:17:33.794870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.387  [2024-10-15T08:17:34.377Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:32.646 00:06:32.646 08:17:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.646 00:06:32.646 real 0m20.295s 00:06:32.646 user 0m14.436s 00:06:32.646 sys 0m8.440s 00:06:32.646 08:17:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.646 ************************************ 00:06:32.646 END TEST spdk_dd_basic_rw 00:06:32.646 08:17:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 ************************************ 00:06:32.646 08:17:34 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:32.646 08:17:34 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.646 08:17:34 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.646 08:17:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:32.646 ************************************ 00:06:32.646 START TEST spdk_dd_posix 00:06:32.646 ************************************ 00:06:32.646 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:32.646 * Looking for test storage... 
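The dd_rw_offset run that finishes above is a plain write-at-offset/read-back round trip: a 4096-byte random [a-z0-9] payload goes through spdk_dd, is read back with read -rn4096, and is compared against the expected string. A minimal sketch of the same round trip with coreutils dd, assuming illustrative /tmp paths and a 4 KiB offset rather than the suite's real parameters:

    # Round trip analogous to dd_rw_offset above, with coreutils dd instead of
    # spdk_dd; paths, offset and size here are illustrative assumptions.
    payload=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)   # random [a-z0-9] payload
    printf %s "$payload" > /tmp/dd.in
    dd if=/tmp/dd.in of=/tmp/dd.out bs=4096 count=1 seek=1 conv=notrunc 2>/dev/null  # write at a 4 KiB offset
    dd if=/tmp/dd.out bs=4096 skip=1 count=1 2>/dev/null | {
      read -rn4096 data_check                          # same read -rn4096 pattern as the trace
      [[ "$data_check" == "$payload" ]] && echo "offset read-back matches"
    }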
00:06:32.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:32.646 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:32.646 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:06:32.646 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:32.904 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:32.904 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.904 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.904 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.904 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.904 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.904 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.905 --rc genhtml_branch_coverage=1 00:06:32.905 --rc genhtml_function_coverage=1 00:06:32.905 --rc genhtml_legend=1 00:06:32.905 --rc geninfo_all_blocks=1 00:06:32.905 --rc geninfo_unexecuted_blocks=1 00:06:32.905 00:06:32.905 ' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.905 --rc genhtml_branch_coverage=1 00:06:32.905 --rc genhtml_function_coverage=1 00:06:32.905 --rc genhtml_legend=1 00:06:32.905 --rc geninfo_all_blocks=1 00:06:32.905 --rc geninfo_unexecuted_blocks=1 00:06:32.905 00:06:32.905 ' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.905 --rc genhtml_branch_coverage=1 00:06:32.905 --rc genhtml_function_coverage=1 00:06:32.905 --rc genhtml_legend=1 00:06:32.905 --rc geninfo_all_blocks=1 00:06:32.905 --rc geninfo_unexecuted_blocks=1 00:06:32.905 00:06:32.905 ' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.905 --rc genhtml_branch_coverage=1 00:06:32.905 --rc genhtml_function_coverage=1 00:06:32.905 --rc genhtml_legend=1 00:06:32.905 --rc geninfo_all_blocks=1 00:06:32.905 --rc geninfo_unexecuted_blocks=1 00:06:32.905 00:06:32.905 ' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:32.905 * First test run, liburing in use 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:32.905 ************************************ 00:06:32.905 START TEST dd_flag_append 00:06:32.905 ************************************ 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=c2zmazbwtm1m8xu517t0xz022pegmw8y 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=v1a65mhu1o4valwwi7i1kylkwdz6msk9 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s c2zmazbwtm1m8xu517t0xz022pegmw8y 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s v1a65mhu1o4valwwi7i1kylkwdz6msk9 00:06:32.905 08:17:34 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:32.905 [2024-10-15 08:17:34.514687] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
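The dd_flag_append setup traced above generates two 32-byte random strings, writes one into each dump file, and then copies dump0 onto dump1 with --oflag=append, so the pass condition checked next is simply the dump1 string followed by the dump0 string. A rough equivalent with coreutils dd (the gen_bytes helper below is a local stand-in, not the suite's implementation, and the /tmp paths are illustrative):

    # Append-flag check in miniature; oflag=append needs conv=notrunc so dd
    # does not truncate the existing destination before appending.
    gen_bytes() { tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }
    dump0=$(gen_bytes 32); dump1=$(gen_bytes 32)
    printf %s "$dump0" > /tmp/dd.dump0
    printf %s "$dump1" > /tmp/dd.dump1
    dd if=/tmp/dd.dump0 of=/tmp/dd.dump1 oflag=append conv=notrunc 2>/dev/null
    [[ $(cat /tmp/dd.dump1) == "${dump1}${dump0}" ]] && echo "existing data preserved, new data appended"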
00:06:32.905 [2024-10-15 08:17:34.515423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60385 ] 00:06:33.164 [2024-10-15 08:17:34.657100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.164 [2024-10-15 08:17:34.741559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.164 [2024-10-15 08:17:34.819680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.164  [2024-10-15T08:17:35.153Z] Copying: 32/32 [B] (average 31 kBps) 00:06:33.422 00:06:33.422 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ v1a65mhu1o4valwwi7i1kylkwdz6msk9c2zmazbwtm1m8xu517t0xz022pegmw8y == \v\1\a\6\5\m\h\u\1\o\4\v\a\l\w\w\i\7\i\1\k\y\l\k\w\d\z\6\m\s\k\9\c\2\z\m\a\z\b\w\t\m\1\m\8\x\u\5\1\7\t\0\x\z\0\2\2\p\e\g\m\w\8\y ]] 00:06:33.422 00:06:33.422 real 0m0.675s 00:06:33.422 user 0m0.382s 00:06:33.422 sys 0m0.363s 00:06:33.422 ************************************ 00:06:33.422 END TEST dd_flag_append 00:06:33.422 ************************************ 00:06:33.422 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.422 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:33.680 ************************************ 00:06:33.680 START TEST dd_flag_directory 00:06:33.680 ************************************ 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.680 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:33.680 [2024-10-15 08:17:35.234647] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:33.680 [2024-10-15 08:17:35.234771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60419 ] 00:06:33.680 [2024-10-15 08:17:35.379540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.939 [2024-10-15 08:17:35.466559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.939 [2024-10-15 08:17:35.542985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.939 [2024-10-15 08:17:35.592561] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.939 [2024-10-15 08:17:35.592647] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.939 [2024-10-15 08:17:35.592678] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.197 [2024-10-15 08:17:35.758885] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.197 08:17:35 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.197 08:17:35 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:34.197 [2024-10-15 08:17:35.912196] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:34.197 [2024-10-15 08:17:35.912344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60428 ] 00:06:34.456 [2024-10-15 08:17:36.051235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.456 [2024-10-15 08:17:36.129595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.715 [2024-10-15 08:17:36.201815] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.715 [2024-10-15 08:17:36.251717] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.715 [2024-10-15 08:17:36.251800] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:34.715 [2024-10-15 08:17:36.251832] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.715 [2024-10-15 08:17:36.417505] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.976 00:06:34.976 real 0m1.328s 00:06:34.976 user 0m0.764s 00:06:34.976 sys 0m0.352s 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:34.976 ************************************ 00:06:34.976 END TEST dd_flag_directory 00:06:34.976 ************************************ 00:06:34.976 08:17:36 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.976 ************************************ 00:06:34.976 START TEST dd_flag_nofollow 00:06:34.976 ************************************ 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.976 08:17:36 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.976 [2024-10-15 08:17:36.629917] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
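dd_flag_nofollow, which starts above, points dd.dump0.link and dd.dump1.link at the real dump files with ln -fs and then expects every spdk_dd open that goes through a link while nofollow is set to fail with "Too many levels of symbolic links" (ELOOP); the final copy through the link without the flag shows the link itself is healthy. The same behaviour can be reproduced with coreutils dd, assuming illustrative /tmp paths:

    # nofollow on the source: opening the symlink itself must fail with ELOOP,
    # matching the spdk_dd error lines that follow in the log.
    printf 'data' > /tmp/dd.dump0
    ln -fs /tmp/dd.dump0 /tmp/dd.dump0.link
    if ! dd if=/tmp/dd.dump0.link iflag=nofollow of=/dev/null 2>/dev/null; then
      echo "nofollow rejected the symlinked source, as expected"
    fi
    dd if=/tmp/dd.dump0.link of=/dev/null 2>/dev/null && echo "plain copy through the link still works"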
00:06:34.976 [2024-10-15 08:17:36.630071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60457 ] 00:06:35.241 [2024-10-15 08:17:36.766786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.242 [2024-10-15 08:17:36.851206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.242 [2024-10-15 08:17:36.922941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.242 [2024-10-15 08:17:36.970938] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:35.242 [2024-10-15 08:17:36.971011] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:35.242 [2024-10-15 08:17:36.971028] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.500 [2024-10-15 08:17:37.129880] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:35.500 08:17:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:35.500 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:35.758 [2024-10-15 08:17:37.263788] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:35.758 [2024-10-15 08:17:37.263892] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60472 ] 00:06:35.758 [2024-10-15 08:17:37.396422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.758 [2024-10-15 08:17:37.475462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.017 [2024-10-15 08:17:37.546687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.017 [2024-10-15 08:17:37.592831] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:36.017 [2024-10-15 08:17:37.592899] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:36.017 [2024-10-15 08:17:37.592916] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.275 [2024-10-15 08:17:37.751366] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:36.275 08:17:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.275 [2024-10-15 08:17:37.905372] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
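Each of the negative cases above goes through the same exit-status dance: the wrapped spdk_dd exits with a large code (236, 216), anything above 128 has 128 subtracted, the remainder collapses to 1, and the test passes only because the final status is non-zero. A reconstruction of that logic based purely on the traced lines (the real NOT helper lives in autotest_common.sh; this body is an approximation, not its source):

    # Approximation of the NOT wrapper seen in the traces (es=236 -> 108 -> 1,
    # es=216 -> 88 -> 1); it succeeds only when the wrapped command fails.
    NOT() {
      local es=0
      "$@" || es=$?
      if (( es > 128 )); then es=$((es - 128)); fi   # fold oversized exit codes back into range
      case "$es" in
        0) es=0 ;;
        *) es=1 ;;
      esac
      (( !es == 0 ))     # non-zero es means the command failed, so NOT returns success
    }
    NOT false && echo "wrapped command failed, NOT reports success"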
00:06:36.275 [2024-10-15 08:17:37.905494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 00:06:36.535 [2024-10-15 08:17:38.042927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.535 [2024-10-15 08:17:38.119689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.535 [2024-10-15 08:17:38.190591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.535  [2024-10-15T08:17:38.525Z] Copying: 512/512 [B] (average 500 kBps) 00:06:36.794 00:06:36.794 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 03ei2rdss6plbiexwxcvgew6o9aqcmqp7fctaaqfsk3jn9o9v9tt74nelrdokta6ebu0wx4gigvucihltx16y7mt81d4t05613hibagy8g6n69qqn7llb4u1xb1jxd1zaq61wwowwayl2wt6k35793aisfih4wv12lgodlepswpfdgqbmxhj9805vc76w790ak13p6fj0t48ykjq7rcdvvqbchjv6s4lerxazr1uh6072x4yjj5tik8okofvi6m3rwb73hxurkobfivmwq70pbwiadjsn12z9zuj0mhtij20gefqjrjq59yuviwl1tcvm7p5e8z6nu3zk6h1u31fg1bkvqnw2luryyicmw1sp8lel2tijymimtmlzjdtqrpzuke48b1xbp5dnji3uhnymfk25auyazdolpcb6w676gv3qbjj4ck2z3mkbpmi51wox22kal7pdfj51zyls3m5hirln3ej6jwej2m330195vllzuc30qrbza76ukmguf37 == \0\3\e\i\2\r\d\s\s\6\p\l\b\i\e\x\w\x\c\v\g\e\w\6\o\9\a\q\c\m\q\p\7\f\c\t\a\a\q\f\s\k\3\j\n\9\o\9\v\9\t\t\7\4\n\e\l\r\d\o\k\t\a\6\e\b\u\0\w\x\4\g\i\g\v\u\c\i\h\l\t\x\1\6\y\7\m\t\8\1\d\4\t\0\5\6\1\3\h\i\b\a\g\y\8\g\6\n\6\9\q\q\n\7\l\l\b\4\u\1\x\b\1\j\x\d\1\z\a\q\6\1\w\w\o\w\w\a\y\l\2\w\t\6\k\3\5\7\9\3\a\i\s\f\i\h\4\w\v\1\2\l\g\o\d\l\e\p\s\w\p\f\d\g\q\b\m\x\h\j\9\8\0\5\v\c\7\6\w\7\9\0\a\k\1\3\p\6\f\j\0\t\4\8\y\k\j\q\7\r\c\d\v\v\q\b\c\h\j\v\6\s\4\l\e\r\x\a\z\r\1\u\h\6\0\7\2\x\4\y\j\j\5\t\i\k\8\o\k\o\f\v\i\6\m\3\r\w\b\7\3\h\x\u\r\k\o\b\f\i\v\m\w\q\7\0\p\b\w\i\a\d\j\s\n\1\2\z\9\z\u\j\0\m\h\t\i\j\2\0\g\e\f\q\j\r\j\q\5\9\y\u\v\i\w\l\1\t\c\v\m\7\p\5\e\8\z\6\n\u\3\z\k\6\h\1\u\3\1\f\g\1\b\k\v\q\n\w\2\l\u\r\y\y\i\c\m\w\1\s\p\8\l\e\l\2\t\i\j\y\m\i\m\t\m\l\z\j\d\t\q\r\p\z\u\k\e\4\8\b\1\x\b\p\5\d\n\j\i\3\u\h\n\y\m\f\k\2\5\a\u\y\a\z\d\o\l\p\c\b\6\w\6\7\6\g\v\3\q\b\j\j\4\c\k\2\z\3\m\k\b\p\m\i\5\1\w\o\x\2\2\k\a\l\7\p\d\f\j\5\1\z\y\l\s\3\m\5\h\i\r\l\n\3\e\j\6\j\w\e\j\2\m\3\3\0\1\9\5\v\l\l\z\u\c\3\0\q\r\b\z\a\7\6\u\k\m\g\u\f\3\7 ]] 00:06:36.794 00:06:36.794 real 0m1.934s 00:06:36.794 user 0m1.095s 00:06:36.794 sys 0m0.694s 00:06:36.794 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.794 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:36.794 ************************************ 00:06:36.794 END TEST dd_flag_nofollow 00:06:36.794 ************************************ 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:37.052 ************************************ 00:06:37.052 START TEST dd_flag_noatime 00:06:37.052 ************************************ 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1728980258 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1728980258 00:06:37.052 08:17:38 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:37.988 08:17:39 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.988 [2024-10-15 08:17:39.622148] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:37.989 [2024-10-15 08:17:39.622280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60522 ] 00:06:38.247 [2024-10-15 08:17:39.762088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.247 [2024-10-15 08:17:39.858939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.247 [2024-10-15 08:17:39.939385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.505  [2024-10-15T08:17:40.495Z] Copying: 512/512 [B] (average 500 kBps) 00:06:38.764 00:06:38.764 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.764 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1728980258 )) 00:06:38.764 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.764 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1728980258 )) 00:06:38.764 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.764 [2024-10-15 08:17:40.325694] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
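dd_flag_noatime, set up above, records the access time of each dump file with stat --printf=%X, sleeps one second so that any fresh read would be visible, and then checks that a copy opened with the noatime flag leaves the source's atime untouched (a later copy without the flag is expected to bump it, which also depends on the mount's atime options). A compact version of the first half with coreutils dd, on an illustrative /tmp file:

    # noatime check in miniature: the flag requests O_NOATIME, so reading the
    # source must not advance its access time. Path is illustrative; O_NOATIME
    # requires owning the file, which holds here because we just created it.
    printf 'payload' > /tmp/dd.dump0
    atime_before=$(stat --printf=%X /tmp/dd.dump0)
    sleep 1
    dd if=/tmp/dd.dump0 iflag=noatime of=/dev/null 2>/dev/null
    atime_after=$(stat --printf=%X /tmp/dd.dump0)
    (( atime_before == atime_after )) && echo "noatime left the source atime unchanged"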
00:06:38.764 [2024-10-15 08:17:40.325804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60535 ] 00:06:38.764 [2024-10-15 08:17:40.464052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.022 [2024-10-15 08:17:40.549196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.022 [2024-10-15 08:17:40.628441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.022  [2024-10-15T08:17:41.012Z] Copying: 512/512 [B] (average 500 kBps) 00:06:39.281 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1728980260 )) 00:06:39.281 00:06:39.281 real 0m2.399s 00:06:39.281 user 0m0.776s 00:06:39.281 sys 0m0.767s 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:39.281 ************************************ 00:06:39.281 END TEST dd_flag_noatime 00:06:39.281 ************************************ 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:39.281 ************************************ 00:06:39.281 START TEST dd_flags_misc 00:06:39.281 ************************************ 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:39.281 08:17:40 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:39.281 08:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.281 08:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:39.540 [2024-10-15 08:17:41.067048] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
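dd_flags_misc, starting above, crosses every read-side flag with every write-side flag: the iflag array holds direct and nonblock, the oflag array adds sync and dsync on top of those, and each pair gets a 512-byte copy followed by the usual content check. The loop shape, with the actual spdk_dd call reduced to a placeholder:

    # Shape of the flag matrix driven above; run_copy stands in for the spdk_dd
    # invocation plus the [[ ... == ... ]] data check and is not a real helper.
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        echo "copy with --iflag=$flag_ro --oflag=$flag_rw"   # run_copy "$flag_ro" "$flag_rw"
      done
    done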
00:06:39.540 [2024-10-15 08:17:41.067217] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60568 ] 00:06:39.540 [2024-10-15 08:17:41.207621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.799 [2024-10-15 08:17:41.283532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.799 [2024-10-15 08:17:41.357327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.799  [2024-10-15T08:17:41.789Z] Copying: 512/512 [B] (average 500 kBps) 00:06:40.058 00:06:40.058 08:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t9e1baba0nn4zdz5n5y7wkz7fi8jkbpq83musha8a2r564mm9mrmqvw32q86zzclgivo4pitlyifk06kmx12pqw8621ntrw5ky08pl5owo2yxcoo287bbd3vccjj3i602kjql3b0vx66zy5b72v9cw96b3ebfpc5wbacm1680ihkxhoe59ngrsji72cunzkbznodef9xbgh0rkvdgp7x8grpjp2iad75htpykq143kiy9ma7ynkl8b8230kx9zdo4jv4qy9u9uvtk0ar9r0t48a1qk2zgjnzvgx85kpn20lnrtzql61joi3bbvz081z12ycsb09v1443pdj1rw3yn5qijqvtfaktsd1lkdtsvrmgxfu4fgbu78p5q2j7txgkjbowhhv8vi054ufuwwozvpmdymm7ngvkkbnau5p5yu6y4mjo34narzglkb32nraqiyyxntkq0ljm6qw30x2fqzgukv75ca9wk3aje20wrnt4hytrfqh25ncmshiepuia == \t\9\e\1\b\a\b\a\0\n\n\4\z\d\z\5\n\5\y\7\w\k\z\7\f\i\8\j\k\b\p\q\8\3\m\u\s\h\a\8\a\2\r\5\6\4\m\m\9\m\r\m\q\v\w\3\2\q\8\6\z\z\c\l\g\i\v\o\4\p\i\t\l\y\i\f\k\0\6\k\m\x\1\2\p\q\w\8\6\2\1\n\t\r\w\5\k\y\0\8\p\l\5\o\w\o\2\y\x\c\o\o\2\8\7\b\b\d\3\v\c\c\j\j\3\i\6\0\2\k\j\q\l\3\b\0\v\x\6\6\z\y\5\b\7\2\v\9\c\w\9\6\b\3\e\b\f\p\c\5\w\b\a\c\m\1\6\8\0\i\h\k\x\h\o\e\5\9\n\g\r\s\j\i\7\2\c\u\n\z\k\b\z\n\o\d\e\f\9\x\b\g\h\0\r\k\v\d\g\p\7\x\8\g\r\p\j\p\2\i\a\d\7\5\h\t\p\y\k\q\1\4\3\k\i\y\9\m\a\7\y\n\k\l\8\b\8\2\3\0\k\x\9\z\d\o\4\j\v\4\q\y\9\u\9\u\v\t\k\0\a\r\9\r\0\t\4\8\a\1\q\k\2\z\g\j\n\z\v\g\x\8\5\k\p\n\2\0\l\n\r\t\z\q\l\6\1\j\o\i\3\b\b\v\z\0\8\1\z\1\2\y\c\s\b\0\9\v\1\4\4\3\p\d\j\1\r\w\3\y\n\5\q\i\j\q\v\t\f\a\k\t\s\d\1\l\k\d\t\s\v\r\m\g\x\f\u\4\f\g\b\u\7\8\p\5\q\2\j\7\t\x\g\k\j\b\o\w\h\h\v\8\v\i\0\5\4\u\f\u\w\w\o\z\v\p\m\d\y\m\m\7\n\g\v\k\k\b\n\a\u\5\p\5\y\u\6\y\4\m\j\o\3\4\n\a\r\z\g\l\k\b\3\2\n\r\a\q\i\y\y\x\n\t\k\q\0\l\j\m\6\q\w\3\0\x\2\f\q\z\g\u\k\v\7\5\c\a\9\w\k\3\a\j\e\2\0\w\r\n\t\4\h\y\t\r\f\q\h\2\5\n\c\m\s\h\i\e\p\u\i\a ]] 00:06:40.058 08:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.058 08:17:41 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:40.058 [2024-10-15 08:17:41.719639] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
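The expected strings in these checks print with a backslash before every character (\t\9\e\1...) for a mundane reason: under set -x, bash renders a quoted right-hand side of == inside [[ ]] with each character escaped, to show that it is compared literally rather than treated as a glob pattern. A two-line demonstration:

    # Why the traces show \t\9\e\1...: xtrace escapes a quoted == pattern
    # inside [[ ]] to mark it as a literal comparison.
    set -x
    data='t9e1'; expected='t9e1'
    [[ $data == "$expected" ]] && echo literal   # traces as: [[ t9e1 == \t\9\e\1 ]]
    set +x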
00:06:40.058 [2024-10-15 08:17:41.719767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60579 ] 00:06:40.317 [2024-10-15 08:17:41.854676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.317 [2024-10-15 08:17:41.936951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.317 [2024-10-15 08:17:42.013890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.576  [2024-10-15T08:17:42.565Z] Copying: 512/512 [B] (average 500 kBps) 00:06:40.834 00:06:40.835 08:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t9e1baba0nn4zdz5n5y7wkz7fi8jkbpq83musha8a2r564mm9mrmqvw32q86zzclgivo4pitlyifk06kmx12pqw8621ntrw5ky08pl5owo2yxcoo287bbd3vccjj3i602kjql3b0vx66zy5b72v9cw96b3ebfpc5wbacm1680ihkxhoe59ngrsji72cunzkbznodef9xbgh0rkvdgp7x8grpjp2iad75htpykq143kiy9ma7ynkl8b8230kx9zdo4jv4qy9u9uvtk0ar9r0t48a1qk2zgjnzvgx85kpn20lnrtzql61joi3bbvz081z12ycsb09v1443pdj1rw3yn5qijqvtfaktsd1lkdtsvrmgxfu4fgbu78p5q2j7txgkjbowhhv8vi054ufuwwozvpmdymm7ngvkkbnau5p5yu6y4mjo34narzglkb32nraqiyyxntkq0ljm6qw30x2fqzgukv75ca9wk3aje20wrnt4hytrfqh25ncmshiepuia == \t\9\e\1\b\a\b\a\0\n\n\4\z\d\z\5\n\5\y\7\w\k\z\7\f\i\8\j\k\b\p\q\8\3\m\u\s\h\a\8\a\2\r\5\6\4\m\m\9\m\r\m\q\v\w\3\2\q\8\6\z\z\c\l\g\i\v\o\4\p\i\t\l\y\i\f\k\0\6\k\m\x\1\2\p\q\w\8\6\2\1\n\t\r\w\5\k\y\0\8\p\l\5\o\w\o\2\y\x\c\o\o\2\8\7\b\b\d\3\v\c\c\j\j\3\i\6\0\2\k\j\q\l\3\b\0\v\x\6\6\z\y\5\b\7\2\v\9\c\w\9\6\b\3\e\b\f\p\c\5\w\b\a\c\m\1\6\8\0\i\h\k\x\h\o\e\5\9\n\g\r\s\j\i\7\2\c\u\n\z\k\b\z\n\o\d\e\f\9\x\b\g\h\0\r\k\v\d\g\p\7\x\8\g\r\p\j\p\2\i\a\d\7\5\h\t\p\y\k\q\1\4\3\k\i\y\9\m\a\7\y\n\k\l\8\b\8\2\3\0\k\x\9\z\d\o\4\j\v\4\q\y\9\u\9\u\v\t\k\0\a\r\9\r\0\t\4\8\a\1\q\k\2\z\g\j\n\z\v\g\x\8\5\k\p\n\2\0\l\n\r\t\z\q\l\6\1\j\o\i\3\b\b\v\z\0\8\1\z\1\2\y\c\s\b\0\9\v\1\4\4\3\p\d\j\1\r\w\3\y\n\5\q\i\j\q\v\t\f\a\k\t\s\d\1\l\k\d\t\s\v\r\m\g\x\f\u\4\f\g\b\u\7\8\p\5\q\2\j\7\t\x\g\k\j\b\o\w\h\h\v\8\v\i\0\5\4\u\f\u\w\w\o\z\v\p\m\d\y\m\m\7\n\g\v\k\k\b\n\a\u\5\p\5\y\u\6\y\4\m\j\o\3\4\n\a\r\z\g\l\k\b\3\2\n\r\a\q\i\y\y\x\n\t\k\q\0\l\j\m\6\q\w\3\0\x\2\f\q\z\g\u\k\v\7\5\c\a\9\w\k\3\a\j\e\2\0\w\r\n\t\4\h\y\t\r\f\q\h\2\5\n\c\m\s\h\i\e\p\u\i\a ]] 00:06:40.835 08:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.835 08:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:40.835 [2024-10-15 08:17:42.380153] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:40.835 [2024-10-15 08:17:42.380298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60593 ] 00:06:40.835 [2024-10-15 08:17:42.519431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.093 [2024-10-15 08:17:42.603846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.093 [2024-10-15 08:17:42.678486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.093  [2024-10-15T08:17:43.082Z] Copying: 512/512 [B] (average 166 kBps) 00:06:41.351 00:06:41.351 08:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t9e1baba0nn4zdz5n5y7wkz7fi8jkbpq83musha8a2r564mm9mrmqvw32q86zzclgivo4pitlyifk06kmx12pqw8621ntrw5ky08pl5owo2yxcoo287bbd3vccjj3i602kjql3b0vx66zy5b72v9cw96b3ebfpc5wbacm1680ihkxhoe59ngrsji72cunzkbznodef9xbgh0rkvdgp7x8grpjp2iad75htpykq143kiy9ma7ynkl8b8230kx9zdo4jv4qy9u9uvtk0ar9r0t48a1qk2zgjnzvgx85kpn20lnrtzql61joi3bbvz081z12ycsb09v1443pdj1rw3yn5qijqvtfaktsd1lkdtsvrmgxfu4fgbu78p5q2j7txgkjbowhhv8vi054ufuwwozvpmdymm7ngvkkbnau5p5yu6y4mjo34narzglkb32nraqiyyxntkq0ljm6qw30x2fqzgukv75ca9wk3aje20wrnt4hytrfqh25ncmshiepuia == \t\9\e\1\b\a\b\a\0\n\n\4\z\d\z\5\n\5\y\7\w\k\z\7\f\i\8\j\k\b\p\q\8\3\m\u\s\h\a\8\a\2\r\5\6\4\m\m\9\m\r\m\q\v\w\3\2\q\8\6\z\z\c\l\g\i\v\o\4\p\i\t\l\y\i\f\k\0\6\k\m\x\1\2\p\q\w\8\6\2\1\n\t\r\w\5\k\y\0\8\p\l\5\o\w\o\2\y\x\c\o\o\2\8\7\b\b\d\3\v\c\c\j\j\3\i\6\0\2\k\j\q\l\3\b\0\v\x\6\6\z\y\5\b\7\2\v\9\c\w\9\6\b\3\e\b\f\p\c\5\w\b\a\c\m\1\6\8\0\i\h\k\x\h\o\e\5\9\n\g\r\s\j\i\7\2\c\u\n\z\k\b\z\n\o\d\e\f\9\x\b\g\h\0\r\k\v\d\g\p\7\x\8\g\r\p\j\p\2\i\a\d\7\5\h\t\p\y\k\q\1\4\3\k\i\y\9\m\a\7\y\n\k\l\8\b\8\2\3\0\k\x\9\z\d\o\4\j\v\4\q\y\9\u\9\u\v\t\k\0\a\r\9\r\0\t\4\8\a\1\q\k\2\z\g\j\n\z\v\g\x\8\5\k\p\n\2\0\l\n\r\t\z\q\l\6\1\j\o\i\3\b\b\v\z\0\8\1\z\1\2\y\c\s\b\0\9\v\1\4\4\3\p\d\j\1\r\w\3\y\n\5\q\i\j\q\v\t\f\a\k\t\s\d\1\l\k\d\t\s\v\r\m\g\x\f\u\4\f\g\b\u\7\8\p\5\q\2\j\7\t\x\g\k\j\b\o\w\h\h\v\8\v\i\0\5\4\u\f\u\w\w\o\z\v\p\m\d\y\m\m\7\n\g\v\k\k\b\n\a\u\5\p\5\y\u\6\y\4\m\j\o\3\4\n\a\r\z\g\l\k\b\3\2\n\r\a\q\i\y\y\x\n\t\k\q\0\l\j\m\6\q\w\3\0\x\2\f\q\z\g\u\k\v\7\5\c\a\9\w\k\3\a\j\e\2\0\w\r\n\t\4\h\y\t\r\f\q\h\2\5\n\c\m\s\h\i\e\p\u\i\a ]] 00:06:41.351 08:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.351 08:17:42 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:41.351 [2024-10-15 08:17:43.047493] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:41.351 [2024-10-15 08:17:43.047621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60598 ] 00:06:41.610 [2024-10-15 08:17:43.184665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.610 [2024-10-15 08:17:43.277099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.868 [2024-10-15 08:17:43.363520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.868  [2024-10-15T08:17:43.860Z] Copying: 512/512 [B] (average 250 kBps) 00:06:42.129 00:06:42.129 08:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ t9e1baba0nn4zdz5n5y7wkz7fi8jkbpq83musha8a2r564mm9mrmqvw32q86zzclgivo4pitlyifk06kmx12pqw8621ntrw5ky08pl5owo2yxcoo287bbd3vccjj3i602kjql3b0vx66zy5b72v9cw96b3ebfpc5wbacm1680ihkxhoe59ngrsji72cunzkbznodef9xbgh0rkvdgp7x8grpjp2iad75htpykq143kiy9ma7ynkl8b8230kx9zdo4jv4qy9u9uvtk0ar9r0t48a1qk2zgjnzvgx85kpn20lnrtzql61joi3bbvz081z12ycsb09v1443pdj1rw3yn5qijqvtfaktsd1lkdtsvrmgxfu4fgbu78p5q2j7txgkjbowhhv8vi054ufuwwozvpmdymm7ngvkkbnau5p5yu6y4mjo34narzglkb32nraqiyyxntkq0ljm6qw30x2fqzgukv75ca9wk3aje20wrnt4hytrfqh25ncmshiepuia == \t\9\e\1\b\a\b\a\0\n\n\4\z\d\z\5\n\5\y\7\w\k\z\7\f\i\8\j\k\b\p\q\8\3\m\u\s\h\a\8\a\2\r\5\6\4\m\m\9\m\r\m\q\v\w\3\2\q\8\6\z\z\c\l\g\i\v\o\4\p\i\t\l\y\i\f\k\0\6\k\m\x\1\2\p\q\w\8\6\2\1\n\t\r\w\5\k\y\0\8\p\l\5\o\w\o\2\y\x\c\o\o\2\8\7\b\b\d\3\v\c\c\j\j\3\i\6\0\2\k\j\q\l\3\b\0\v\x\6\6\z\y\5\b\7\2\v\9\c\w\9\6\b\3\e\b\f\p\c\5\w\b\a\c\m\1\6\8\0\i\h\k\x\h\o\e\5\9\n\g\r\s\j\i\7\2\c\u\n\z\k\b\z\n\o\d\e\f\9\x\b\g\h\0\r\k\v\d\g\p\7\x\8\g\r\p\j\p\2\i\a\d\7\5\h\t\p\y\k\q\1\4\3\k\i\y\9\m\a\7\y\n\k\l\8\b\8\2\3\0\k\x\9\z\d\o\4\j\v\4\q\y\9\u\9\u\v\t\k\0\a\r\9\r\0\t\4\8\a\1\q\k\2\z\g\j\n\z\v\g\x\8\5\k\p\n\2\0\l\n\r\t\z\q\l\6\1\j\o\i\3\b\b\v\z\0\8\1\z\1\2\y\c\s\b\0\9\v\1\4\4\3\p\d\j\1\r\w\3\y\n\5\q\i\j\q\v\t\f\a\k\t\s\d\1\l\k\d\t\s\v\r\m\g\x\f\u\4\f\g\b\u\7\8\p\5\q\2\j\7\t\x\g\k\j\b\o\w\h\h\v\8\v\i\0\5\4\u\f\u\w\w\o\z\v\p\m\d\y\m\m\7\n\g\v\k\k\b\n\a\u\5\p\5\y\u\6\y\4\m\j\o\3\4\n\a\r\z\g\l\k\b\3\2\n\r\a\q\i\y\y\x\n\t\k\q\0\l\j\m\6\q\w\3\0\x\2\f\q\z\g\u\k\v\7\5\c\a\9\w\k\3\a\j\e\2\0\w\r\n\t\4\h\y\t\r\f\q\h\2\5\n\c\m\s\h\i\e\p\u\i\a ]] 00:06:42.129 08:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:42.129 08:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:42.129 08:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:42.129 08:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:42.129 08:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.129 08:17:43 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:42.129 [2024-10-15 08:17:43.742873] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:42.129 [2024-10-15 08:17:43.743007] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60613 ] 00:06:42.388 [2024-10-15 08:17:43.883315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.388 [2024-10-15 08:17:43.964539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.388 [2024-10-15 08:17:44.040894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.388  [2024-10-15T08:17:44.377Z] Copying: 512/512 [B] (average 500 kBps) 00:06:42.646 00:06:42.646 08:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qta7x9jehtcileplsgbo9dkad6vs73f46fsr9uvtqcp7mdceq80cctmwcj81s6grwgcw9sgc7cghzxss5dkuf1ntgymn1qxprfreuleuivmptmg2u32fpsoj9t3ldag15q2j9jinysbuh8ned65uajc45q6a1lmj3eo4j71w2p4ds0etmeoine116ivi3wc3xee5fivfv9yp73mz40womu0mt9rvqz6wavubdh4n5xa6qjnyf9q9b4mqcfdce615vvdpf1dbvaclz9mdg94pbwtv7wensg4pvr1ol4wh8ql72ojecnkrmjn2dae9oc71xpw81skyljrrxm2ycwbeutc77peizmcth1dxinp4zlvm6edk3l2l5z1s2pze57pukk6fs0p6snll6jc4yla8c2k6wahmv4n5yuojd2v6kc1kiy93a8g0pe9wj3up0uscrenollcmuv93xte60003by2axf4kky5a6edlpe4gb6o1gsngdiym5oe0g9ueaw93 == \q\t\a\7\x\9\j\e\h\t\c\i\l\e\p\l\s\g\b\o\9\d\k\a\d\6\v\s\7\3\f\4\6\f\s\r\9\u\v\t\q\c\p\7\m\d\c\e\q\8\0\c\c\t\m\w\c\j\8\1\s\6\g\r\w\g\c\w\9\s\g\c\7\c\g\h\z\x\s\s\5\d\k\u\f\1\n\t\g\y\m\n\1\q\x\p\r\f\r\e\u\l\e\u\i\v\m\p\t\m\g\2\u\3\2\f\p\s\o\j\9\t\3\l\d\a\g\1\5\q\2\j\9\j\i\n\y\s\b\u\h\8\n\e\d\6\5\u\a\j\c\4\5\q\6\a\1\l\m\j\3\e\o\4\j\7\1\w\2\p\4\d\s\0\e\t\m\e\o\i\n\e\1\1\6\i\v\i\3\w\c\3\x\e\e\5\f\i\v\f\v\9\y\p\7\3\m\z\4\0\w\o\m\u\0\m\t\9\r\v\q\z\6\w\a\v\u\b\d\h\4\n\5\x\a\6\q\j\n\y\f\9\q\9\b\4\m\q\c\f\d\c\e\6\1\5\v\v\d\p\f\1\d\b\v\a\c\l\z\9\m\d\g\9\4\p\b\w\t\v\7\w\e\n\s\g\4\p\v\r\1\o\l\4\w\h\8\q\l\7\2\o\j\e\c\n\k\r\m\j\n\2\d\a\e\9\o\c\7\1\x\p\w\8\1\s\k\y\l\j\r\r\x\m\2\y\c\w\b\e\u\t\c\7\7\p\e\i\z\m\c\t\h\1\d\x\i\n\p\4\z\l\v\m\6\e\d\k\3\l\2\l\5\z\1\s\2\p\z\e\5\7\p\u\k\k\6\f\s\0\p\6\s\n\l\l\6\j\c\4\y\l\a\8\c\2\k\6\w\a\h\m\v\4\n\5\y\u\o\j\d\2\v\6\k\c\1\k\i\y\9\3\a\8\g\0\p\e\9\w\j\3\u\p\0\u\s\c\r\e\n\o\l\l\c\m\u\v\9\3\x\t\e\6\0\0\0\3\b\y\2\a\x\f\4\k\k\y\5\a\6\e\d\l\p\e\4\g\b\6\o\1\g\s\n\g\d\i\y\m\5\o\e\0\g\9\u\e\a\w\9\3 ]] 00:06:42.646 08:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.646 08:17:44 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:42.905 [2024-10-15 08:17:44.428195] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:42.905 [2024-10-15 08:17:44.428348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:06:42.905 [2024-10-15 08:17:44.567817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.162 [2024-10-15 08:17:44.652221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.162 [2024-10-15 08:17:44.731226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.162  [2024-10-15T08:17:45.152Z] Copying: 512/512 [B] (average 500 kBps) 00:06:43.421 00:06:43.421 08:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qta7x9jehtcileplsgbo9dkad6vs73f46fsr9uvtqcp7mdceq80cctmwcj81s6grwgcw9sgc7cghzxss5dkuf1ntgymn1qxprfreuleuivmptmg2u32fpsoj9t3ldag15q2j9jinysbuh8ned65uajc45q6a1lmj3eo4j71w2p4ds0etmeoine116ivi3wc3xee5fivfv9yp73mz40womu0mt9rvqz6wavubdh4n5xa6qjnyf9q9b4mqcfdce615vvdpf1dbvaclz9mdg94pbwtv7wensg4pvr1ol4wh8ql72ojecnkrmjn2dae9oc71xpw81skyljrrxm2ycwbeutc77peizmcth1dxinp4zlvm6edk3l2l5z1s2pze57pukk6fs0p6snll6jc4yla8c2k6wahmv4n5yuojd2v6kc1kiy93a8g0pe9wj3up0uscrenollcmuv93xte60003by2axf4kky5a6edlpe4gb6o1gsngdiym5oe0g9ueaw93 == \q\t\a\7\x\9\j\e\h\t\c\i\l\e\p\l\s\g\b\o\9\d\k\a\d\6\v\s\7\3\f\4\6\f\s\r\9\u\v\t\q\c\p\7\m\d\c\e\q\8\0\c\c\t\m\w\c\j\8\1\s\6\g\r\w\g\c\w\9\s\g\c\7\c\g\h\z\x\s\s\5\d\k\u\f\1\n\t\g\y\m\n\1\q\x\p\r\f\r\e\u\l\e\u\i\v\m\p\t\m\g\2\u\3\2\f\p\s\o\j\9\t\3\l\d\a\g\1\5\q\2\j\9\j\i\n\y\s\b\u\h\8\n\e\d\6\5\u\a\j\c\4\5\q\6\a\1\l\m\j\3\e\o\4\j\7\1\w\2\p\4\d\s\0\e\t\m\e\o\i\n\e\1\1\6\i\v\i\3\w\c\3\x\e\e\5\f\i\v\f\v\9\y\p\7\3\m\z\4\0\w\o\m\u\0\m\t\9\r\v\q\z\6\w\a\v\u\b\d\h\4\n\5\x\a\6\q\j\n\y\f\9\q\9\b\4\m\q\c\f\d\c\e\6\1\5\v\v\d\p\f\1\d\b\v\a\c\l\z\9\m\d\g\9\4\p\b\w\t\v\7\w\e\n\s\g\4\p\v\r\1\o\l\4\w\h\8\q\l\7\2\o\j\e\c\n\k\r\m\j\n\2\d\a\e\9\o\c\7\1\x\p\w\8\1\s\k\y\l\j\r\r\x\m\2\y\c\w\b\e\u\t\c\7\7\p\e\i\z\m\c\t\h\1\d\x\i\n\p\4\z\l\v\m\6\e\d\k\3\l\2\l\5\z\1\s\2\p\z\e\5\7\p\u\k\k\6\f\s\0\p\6\s\n\l\l\6\j\c\4\y\l\a\8\c\2\k\6\w\a\h\m\v\4\n\5\y\u\o\j\d\2\v\6\k\c\1\k\i\y\9\3\a\8\g\0\p\e\9\w\j\3\u\p\0\u\s\c\r\e\n\o\l\l\c\m\u\v\9\3\x\t\e\6\0\0\0\3\b\y\2\a\x\f\4\k\k\y\5\a\6\e\d\l\p\e\4\g\b\6\o\1\g\s\n\g\d\i\y\m\5\o\e\0\g\9\u\e\a\w\9\3 ]] 00:06:43.421 08:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.421 08:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:43.421 [2024-10-15 08:17:45.091334] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:43.421 [2024-10-15 08:17:45.091488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60632 ] 00:06:43.679 [2024-10-15 08:17:45.229094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.679 [2024-10-15 08:17:45.309486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.679 [2024-10-15 08:17:45.383232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.938  [2024-10-15T08:17:45.928Z] Copying: 512/512 [B] (average 250 kBps) 00:06:44.197 00:06:44.197 08:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qta7x9jehtcileplsgbo9dkad6vs73f46fsr9uvtqcp7mdceq80cctmwcj81s6grwgcw9sgc7cghzxss5dkuf1ntgymn1qxprfreuleuivmptmg2u32fpsoj9t3ldag15q2j9jinysbuh8ned65uajc45q6a1lmj3eo4j71w2p4ds0etmeoine116ivi3wc3xee5fivfv9yp73mz40womu0mt9rvqz6wavubdh4n5xa6qjnyf9q9b4mqcfdce615vvdpf1dbvaclz9mdg94pbwtv7wensg4pvr1ol4wh8ql72ojecnkrmjn2dae9oc71xpw81skyljrrxm2ycwbeutc77peizmcth1dxinp4zlvm6edk3l2l5z1s2pze57pukk6fs0p6snll6jc4yla8c2k6wahmv4n5yuojd2v6kc1kiy93a8g0pe9wj3up0uscrenollcmuv93xte60003by2axf4kky5a6edlpe4gb6o1gsngdiym5oe0g9ueaw93 == \q\t\a\7\x\9\j\e\h\t\c\i\l\e\p\l\s\g\b\o\9\d\k\a\d\6\v\s\7\3\f\4\6\f\s\r\9\u\v\t\q\c\p\7\m\d\c\e\q\8\0\c\c\t\m\w\c\j\8\1\s\6\g\r\w\g\c\w\9\s\g\c\7\c\g\h\z\x\s\s\5\d\k\u\f\1\n\t\g\y\m\n\1\q\x\p\r\f\r\e\u\l\e\u\i\v\m\p\t\m\g\2\u\3\2\f\p\s\o\j\9\t\3\l\d\a\g\1\5\q\2\j\9\j\i\n\y\s\b\u\h\8\n\e\d\6\5\u\a\j\c\4\5\q\6\a\1\l\m\j\3\e\o\4\j\7\1\w\2\p\4\d\s\0\e\t\m\e\o\i\n\e\1\1\6\i\v\i\3\w\c\3\x\e\e\5\f\i\v\f\v\9\y\p\7\3\m\z\4\0\w\o\m\u\0\m\t\9\r\v\q\z\6\w\a\v\u\b\d\h\4\n\5\x\a\6\q\j\n\y\f\9\q\9\b\4\m\q\c\f\d\c\e\6\1\5\v\v\d\p\f\1\d\b\v\a\c\l\z\9\m\d\g\9\4\p\b\w\t\v\7\w\e\n\s\g\4\p\v\r\1\o\l\4\w\h\8\q\l\7\2\o\j\e\c\n\k\r\m\j\n\2\d\a\e\9\o\c\7\1\x\p\w\8\1\s\k\y\l\j\r\r\x\m\2\y\c\w\b\e\u\t\c\7\7\p\e\i\z\m\c\t\h\1\d\x\i\n\p\4\z\l\v\m\6\e\d\k\3\l\2\l\5\z\1\s\2\p\z\e\5\7\p\u\k\k\6\f\s\0\p\6\s\n\l\l\6\j\c\4\y\l\a\8\c\2\k\6\w\a\h\m\v\4\n\5\y\u\o\j\d\2\v\6\k\c\1\k\i\y\9\3\a\8\g\0\p\e\9\w\j\3\u\p\0\u\s\c\r\e\n\o\l\l\c\m\u\v\9\3\x\t\e\6\0\0\0\3\b\y\2\a\x\f\4\k\k\y\5\a\6\e\d\l\p\e\4\g\b\6\o\1\g\s\n\g\d\i\y\m\5\o\e\0\g\9\u\e\a\w\9\3 ]] 00:06:44.197 08:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:44.197 08:17:45 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:44.197 [2024-10-15 08:17:45.748488] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:44.197 [2024-10-15 08:17:45.748623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60647 ] 00:06:44.197 [2024-10-15 08:17:45.888409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.455 [2024-10-15 08:17:45.968896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.455 [2024-10-15 08:17:46.042919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.455  [2024-10-15T08:17:46.445Z] Copying: 512/512 [B] (average 250 kBps) 00:06:44.714 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qta7x9jehtcileplsgbo9dkad6vs73f46fsr9uvtqcp7mdceq80cctmwcj81s6grwgcw9sgc7cghzxss5dkuf1ntgymn1qxprfreuleuivmptmg2u32fpsoj9t3ldag15q2j9jinysbuh8ned65uajc45q6a1lmj3eo4j71w2p4ds0etmeoine116ivi3wc3xee5fivfv9yp73mz40womu0mt9rvqz6wavubdh4n5xa6qjnyf9q9b4mqcfdce615vvdpf1dbvaclz9mdg94pbwtv7wensg4pvr1ol4wh8ql72ojecnkrmjn2dae9oc71xpw81skyljrrxm2ycwbeutc77peizmcth1dxinp4zlvm6edk3l2l5z1s2pze57pukk6fs0p6snll6jc4yla8c2k6wahmv4n5yuojd2v6kc1kiy93a8g0pe9wj3up0uscrenollcmuv93xte60003by2axf4kky5a6edlpe4gb6o1gsngdiym5oe0g9ueaw93 == \q\t\a\7\x\9\j\e\h\t\c\i\l\e\p\l\s\g\b\o\9\d\k\a\d\6\v\s\7\3\f\4\6\f\s\r\9\u\v\t\q\c\p\7\m\d\c\e\q\8\0\c\c\t\m\w\c\j\8\1\s\6\g\r\w\g\c\w\9\s\g\c\7\c\g\h\z\x\s\s\5\d\k\u\f\1\n\t\g\y\m\n\1\q\x\p\r\f\r\e\u\l\e\u\i\v\m\p\t\m\g\2\u\3\2\f\p\s\o\j\9\t\3\l\d\a\g\1\5\q\2\j\9\j\i\n\y\s\b\u\h\8\n\e\d\6\5\u\a\j\c\4\5\q\6\a\1\l\m\j\3\e\o\4\j\7\1\w\2\p\4\d\s\0\e\t\m\e\o\i\n\e\1\1\6\i\v\i\3\w\c\3\x\e\e\5\f\i\v\f\v\9\y\p\7\3\m\z\4\0\w\o\m\u\0\m\t\9\r\v\q\z\6\w\a\v\u\b\d\h\4\n\5\x\a\6\q\j\n\y\f\9\q\9\b\4\m\q\c\f\d\c\e\6\1\5\v\v\d\p\f\1\d\b\v\a\c\l\z\9\m\d\g\9\4\p\b\w\t\v\7\w\e\n\s\g\4\p\v\r\1\o\l\4\w\h\8\q\l\7\2\o\j\e\c\n\k\r\m\j\n\2\d\a\e\9\o\c\7\1\x\p\w\8\1\s\k\y\l\j\r\r\x\m\2\y\c\w\b\e\u\t\c\7\7\p\e\i\z\m\c\t\h\1\d\x\i\n\p\4\z\l\v\m\6\e\d\k\3\l\2\l\5\z\1\s\2\p\z\e\5\7\p\u\k\k\6\f\s\0\p\6\s\n\l\l\6\j\c\4\y\l\a\8\c\2\k\6\w\a\h\m\v\4\n\5\y\u\o\j\d\2\v\6\k\c\1\k\i\y\9\3\a\8\g\0\p\e\9\w\j\3\u\p\0\u\s\c\r\e\n\o\l\l\c\m\u\v\9\3\x\t\e\6\0\0\0\3\b\y\2\a\x\f\4\k\k\y\5\a\6\e\d\l\p\e\4\g\b\6\o\1\g\s\n\g\d\i\y\m\5\o\e\0\g\9\u\e\a\w\9\3 ]] 00:06:44.714 00:06:44.714 real 0m5.362s 00:06:44.714 user 0m3.013s 00:06:44.714 sys 0m2.937s 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:44.714 ************************************ 00:06:44.714 END TEST dd_flags_misc 00:06:44.714 ************************************ 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:44.714 * Second test run, disabling liburing, forcing AIO 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:44.714 ************************************ 00:06:44.714 START TEST dd_flag_append_forced_aio 00:06:44.714 ************************************ 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=xh8cn7ug038yycfwm8wncndvxgnku8ka 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=jrtkxebw1gr36buuqpfpm1h1o49iam0o 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s xh8cn7ug038yycfwm8wncndvxgnku8ka 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s jrtkxebw1gr36buuqpfpm1h1o49iam0o 00:06:44.714 08:17:46 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:44.973 [2024-10-15 08:17:46.479298] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:44.973 [2024-10-15 08:17:46.479427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60680 ] 00:06:44.973 [2024-10-15 08:17:46.613813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.973 [2024-10-15 08:17:46.698870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.231 [2024-10-15 08:17:46.773954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.231  [2024-10-15T08:17:47.293Z] Copying: 32/32 [B] (average 31 kBps) 00:06:45.562 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ jrtkxebw1gr36buuqpfpm1h1o49iam0oxh8cn7ug038yycfwm8wncndvxgnku8ka == \j\r\t\k\x\e\b\w\1\g\r\3\6\b\u\u\q\p\f\p\m\1\h\1\o\4\9\i\a\m\0\o\x\h\8\c\n\7\u\g\0\3\8\y\y\c\f\w\m\8\w\n\c\n\d\v\x\g\n\k\u\8\k\a ]] 00:06:45.563 00:06:45.563 real 0m0.681s 00:06:45.563 user 0m0.369s 00:06:45.563 sys 0m0.191s 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.563 ************************************ 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.563 END TEST dd_flag_append_forced_aio 00:06:45.563 ************************************ 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.563 ************************************ 00:06:45.563 START TEST dd_flag_directory_forced_aio 00:06:45.563 ************************************ 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.563 08:17:47 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.563 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.563 [2024-10-15 08:17:47.218757] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:45.563 [2024-10-15 08:17:47.218900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60702 ] 00:06:45.832 [2024-10-15 08:17:47.361801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.832 [2024-10-15 08:17:47.448325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.832 [2024-10-15 08:17:47.525528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.090 [2024-10-15 08:17:47.575147] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.090 [2024-10-15 08:17:47.575225] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.090 [2024-10-15 08:17:47.575243] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.090 [2024-10-15 08:17:47.750177] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.347 08:17:47 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.347 [2024-10-15 08:17:47.890621] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:46.347 [2024-10-15 08:17:47.890764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60717 ] 00:06:46.347 [2024-10-15 08:17:48.025187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.604 [2024-10-15 08:17:48.107342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.604 [2024-10-15 08:17:48.182514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.604 [2024-10-15 08:17:48.232114] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.604 [2024-10-15 08:17:48.232196] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.604 [2024-10-15 08:17:48.232213] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.862 [2024-10-15 08:17:48.401326] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:46.862 08:17:48 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.862 00:06:46.862 real 0m1.339s 00:06:46.862 user 0m0.755s 00:06:46.862 sys 0m0.370s 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:46.862 ************************************ 00:06:46.862 END TEST dd_flag_directory_forced_aio 00:06:46.862 ************************************ 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:46.862 ************************************ 00:06:46.862 START TEST dd_flag_nofollow_forced_aio 00:06:46.862 ************************************ 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.862 08:17:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.119 [2024-10-15 08:17:48.599875] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:47.119 [2024-10-15 08:17:48.599976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60751 ] 00:06:47.119 [2024-10-15 08:17:48.733884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.119 [2024-10-15 08:17:48.807909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.377 [2024-10-15 08:17:48.881530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.377 [2024-10-15 08:17:48.928994] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:47.377 [2024-10-15 08:17:48.929073] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:47.377 [2024-10-15 08:17:48.929090] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.377 [2024-10-15 08:17:49.092682] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.635 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.635 [2024-10-15 08:17:49.242752] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:47.635 [2024-10-15 08:17:49.243110] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60755 ] 00:06:47.893 [2024-10-15 08:17:49.381666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.893 [2024-10-15 08:17:49.465584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.893 [2024-10-15 08:17:49.538549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.893 [2024-10-15 08:17:49.590128] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:47.893 [2024-10-15 08:17:49.590202] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:47.893 [2024-10-15 08:17:49.590219] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.152 [2024-10-15 08:17:49.761005] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:48.152 08:17:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.410 [2024-10-15 08:17:49.918081] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:48.410 [2024-10-15 08:17:49.918240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60768 ] 00:06:48.410 [2024-10-15 08:17:50.058475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.705 [2024-10-15 08:17:50.141903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.705 [2024-10-15 08:17:50.219252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.705  [2024-10-15T08:17:50.694Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.963 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 2w6plxd1tca2t0pg5x9va7h0d77duuwih57t52p6obe1gy0co6lvjttdgg8vt36b12uxxaftavwjfpj8h2m1sw2o5s18xaxge6rg7l002bawwnx3vwruujdnl3mbyogt0fgvrngixkvtvkiotsi1w1yqeodjfv86vd4dezvzqa4l9eex2g1mvo0l6xumxdzjmbqohb47rcd735kfj22z1fqi9t3izviynzo44n3isf1ldtvfk9o4de1gjq5w98y3vniws55jzbcb0g4ygta0pcvc1ivrfqn4j02kvzp41rjizvlnq9jq7vui4z4ew19y6oofnlmw7tv7qjtfaq8ophucofa3bo989a8ykox679lam2b4y9hxa2e8wsvlj0fhecnqxtfpfqilbxybrnv25u9h17wv4pafj6xc88y90sebjv7a6ihq55ioe1h5tso1prjcyl94e0zgeapl63bv9iim2k21l5h8fdfgkcqazun79m839e9h1ryxxolporju == \2\w\6\p\l\x\d\1\t\c\a\2\t\0\p\g\5\x\9\v\a\7\h\0\d\7\7\d\u\u\w\i\h\5\7\t\5\2\p\6\o\b\e\1\g\y\0\c\o\6\l\v\j\t\t\d\g\g\8\v\t\3\6\b\1\2\u\x\x\a\f\t\a\v\w\j\f\p\j\8\h\2\m\1\s\w\2\o\5\s\1\8\x\a\x\g\e\6\r\g\7\l\0\0\2\b\a\w\w\n\x\3\v\w\r\u\u\j\d\n\l\3\m\b\y\o\g\t\0\f\g\v\r\n\g\i\x\k\v\t\v\k\i\o\t\s\i\1\w\1\y\q\e\o\d\j\f\v\8\6\v\d\4\d\e\z\v\z\q\a\4\l\9\e\e\x\2\g\1\m\v\o\0\l\6\x\u\m\x\d\z\j\m\b\q\o\h\b\4\7\r\c\d\7\3\5\k\f\j\2\2\z\1\f\q\i\9\t\3\i\z\v\i\y\n\z\o\4\4\n\3\i\s\f\1\l\d\t\v\f\k\9\o\4\d\e\1\g\j\q\5\w\9\8\y\3\v\n\i\w\s\5\5\j\z\b\c\b\0\g\4\y\g\t\a\0\p\c\v\c\1\i\v\r\f\q\n\4\j\0\2\k\v\z\p\4\1\r\j\i\z\v\l\n\q\9\j\q\7\v\u\i\4\z\4\e\w\1\9\y\6\o\o\f\n\l\m\w\7\t\v\7\q\j\t\f\a\q\8\o\p\h\u\c\o\f\a\3\b\o\9\8\9\a\8\y\k\o\x\6\7\9\l\a\m\2\b\4\y\9\h\x\a\2\e\8\w\s\v\l\j\0\f\h\e\c\n\q\x\t\f\p\f\q\i\l\b\x\y\b\r\n\v\2\5\u\9\h\1\7\w\v\4\p\a\f\j\6\x\c\8\8\y\9\0\s\e\b\j\v\7\a\6\i\h\q\5\5\i\o\e\1\h\5\t\s\o\1\p\r\j\c\y\l\9\4\e\0\z\g\e\a\p\l\6\3\b\v\9\i\i\m\2\k\2\1\l\5\h\8\f\d\f\g\k\c\q\a\z\u\n\7\9\m\8\3\9\e\9\h\1\r\y\x\x\o\l\p\o\r\j\u ]] 00:06:48.964 00:06:48.964 real 0m2.020s 00:06:48.964 user 0m1.144s 00:06:48.964 sys 0m0.540s 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 ************************************ 00:06:48.964 END TEST dd_flag_nofollow_forced_aio 00:06:48.964 ************************************ 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 ************************************ 00:06:48.964 START TEST dd_flag_noatime_forced_aio 00:06:48.964 ************************************ 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1728980270 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1728980270 00:06:48.964 08:17:50 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:50.340 08:17:51 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.340 [2024-10-15 08:17:51.733083] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:50.340 [2024-10-15 08:17:51.733340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60814 ] 00:06:50.340 [2024-10-15 08:17:51.885461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.340 [2024-10-15 08:17:51.979156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.340 [2024-10-15 08:17:52.056250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.599  [2024-10-15T08:17:52.588Z] Copying: 512/512 [B] (average 500 kBps) 00:06:50.857 00:06:50.857 08:17:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.858 08:17:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1728980270 )) 00:06:50.858 08:17:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.858 08:17:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1728980270 )) 00:06:50.858 08:17:52 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.858 [2024-10-15 08:17:52.465966] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:50.858 [2024-10-15 08:17:52.466163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60820 ] 00:06:51.116 [2024-10-15 08:17:52.610516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.116 [2024-10-15 08:17:52.688308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.116 [2024-10-15 08:17:52.761346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.116  [2024-10-15T08:17:53.105Z] Copying: 512/512 [B] (average 500 kBps) 00:06:51.374 00:06:51.374 08:17:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.374 ************************************ 00:06:51.374 END TEST dd_flag_noatime_forced_aio 00:06:51.374 ************************************ 00:06:51.374 08:17:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1728980272 )) 00:06:51.374 00:06:51.374 real 0m2.472s 00:06:51.374 user 0m0.815s 00:06:51.374 sys 0m0.406s 00:06:51.374 08:17:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.374 08:17:53 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.632 ************************************ 00:06:51.632 START TEST dd_flags_misc_forced_aio 00:06:51.632 ************************************ 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.632 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:51.632 [2024-10-15 08:17:53.200374] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:51.632 [2024-10-15 08:17:53.200488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60853 ] 00:06:51.632 [2024-10-15 08:17:53.337055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.890 [2024-10-15 08:17:53.422348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.890 [2024-10-15 08:17:53.498382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.890  [2024-10-15T08:17:53.879Z] Copying: 512/512 [B] (average 500 kBps) 00:06:52.148 00:06:52.148 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xdhfzoe44jug6nsohwxt5of9q0anlle3zo6nzsjn9izvtujc4txg8je5vyjwst5sp61wo1xek354g20ys362853r68zepc7bdfk7ltke5lkjatzd5kabguiqkjknidwyph9bdcpuuzwdg41s0pl9t2nianrae1lltb8fgzuo8wigzffmlh4o0cru18cj2trtd7e4vlhtq7g5q12jokv5o1xj7k5vkontmgfc38f3v3uq0jm70on9r82cv06f39xe4l0dxz2mw6nx9z4cotf49of73nozb3gyah9ozb3seojk8vd93uy1ewb5nmto1pegw9jb16z4k7jhw7hcdb3467qsyhk2upsrortxuvzx305tnq311sq12e9l43ppla3ueyxytrqwq5tbk26m86l4a7v89jc1wx8npxt1rfpeukmaz5t3mv0kw6gna6a72a3z406spml2p8d3lcthin9s6ar44mzuhcf615hy826rf1m973w2516vs0io1sxdfutn == 
\x\d\h\f\z\o\e\4\4\j\u\g\6\n\s\o\h\w\x\t\5\o\f\9\q\0\a\n\l\l\e\3\z\o\6\n\z\s\j\n\9\i\z\v\t\u\j\c\4\t\x\g\8\j\e\5\v\y\j\w\s\t\5\s\p\6\1\w\o\1\x\e\k\3\5\4\g\2\0\y\s\3\6\2\8\5\3\r\6\8\z\e\p\c\7\b\d\f\k\7\l\t\k\e\5\l\k\j\a\t\z\d\5\k\a\b\g\u\i\q\k\j\k\n\i\d\w\y\p\h\9\b\d\c\p\u\u\z\w\d\g\4\1\s\0\p\l\9\t\2\n\i\a\n\r\a\e\1\l\l\t\b\8\f\g\z\u\o\8\w\i\g\z\f\f\m\l\h\4\o\0\c\r\u\1\8\c\j\2\t\r\t\d\7\e\4\v\l\h\t\q\7\g\5\q\1\2\j\o\k\v\5\o\1\x\j\7\k\5\v\k\o\n\t\m\g\f\c\3\8\f\3\v\3\u\q\0\j\m\7\0\o\n\9\r\8\2\c\v\0\6\f\3\9\x\e\4\l\0\d\x\z\2\m\w\6\n\x\9\z\4\c\o\t\f\4\9\o\f\7\3\n\o\z\b\3\g\y\a\h\9\o\z\b\3\s\e\o\j\k\8\v\d\9\3\u\y\1\e\w\b\5\n\m\t\o\1\p\e\g\w\9\j\b\1\6\z\4\k\7\j\h\w\7\h\c\d\b\3\4\6\7\q\s\y\h\k\2\u\p\s\r\o\r\t\x\u\v\z\x\3\0\5\t\n\q\3\1\1\s\q\1\2\e\9\l\4\3\p\p\l\a\3\u\e\y\x\y\t\r\q\w\q\5\t\b\k\2\6\m\8\6\l\4\a\7\v\8\9\j\c\1\w\x\8\n\p\x\t\1\r\f\p\e\u\k\m\a\z\5\t\3\m\v\0\k\w\6\g\n\a\6\a\7\2\a\3\z\4\0\6\s\p\m\l\2\p\8\d\3\l\c\t\h\i\n\9\s\6\a\r\4\4\m\z\u\h\c\f\6\1\5\h\y\8\2\6\r\f\1\m\9\7\3\w\2\5\1\6\v\s\0\i\o\1\s\x\d\f\u\t\n ]] 00:06:52.148 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.148 08:17:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:52.406 [2024-10-15 08:17:53.934692] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:52.406 [2024-10-15 08:17:53.935197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ] 00:06:52.406 [2024-10-15 08:17:54.077464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.664 [2024-10-15 08:17:54.157518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.664 [2024-10-15 08:17:54.230669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.664  [2024-10-15T08:17:54.654Z] Copying: 512/512 [B] (average 500 kBps) 00:06:52.923 00:06:52.923 08:17:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xdhfzoe44jug6nsohwxt5of9q0anlle3zo6nzsjn9izvtujc4txg8je5vyjwst5sp61wo1xek354g20ys362853r68zepc7bdfk7ltke5lkjatzd5kabguiqkjknidwyph9bdcpuuzwdg41s0pl9t2nianrae1lltb8fgzuo8wigzffmlh4o0cru18cj2trtd7e4vlhtq7g5q12jokv5o1xj7k5vkontmgfc38f3v3uq0jm70on9r82cv06f39xe4l0dxz2mw6nx9z4cotf49of73nozb3gyah9ozb3seojk8vd93uy1ewb5nmto1pegw9jb16z4k7jhw7hcdb3467qsyhk2upsrortxuvzx305tnq311sq12e9l43ppla3ueyxytrqwq5tbk26m86l4a7v89jc1wx8npxt1rfpeukmaz5t3mv0kw6gna6a72a3z406spml2p8d3lcthin9s6ar44mzuhcf615hy826rf1m973w2516vs0io1sxdfutn == 
\x\d\h\f\z\o\e\4\4\j\u\g\6\n\s\o\h\w\x\t\5\o\f\9\q\0\a\n\l\l\e\3\z\o\6\n\z\s\j\n\9\i\z\v\t\u\j\c\4\t\x\g\8\j\e\5\v\y\j\w\s\t\5\s\p\6\1\w\o\1\x\e\k\3\5\4\g\2\0\y\s\3\6\2\8\5\3\r\6\8\z\e\p\c\7\b\d\f\k\7\l\t\k\e\5\l\k\j\a\t\z\d\5\k\a\b\g\u\i\q\k\j\k\n\i\d\w\y\p\h\9\b\d\c\p\u\u\z\w\d\g\4\1\s\0\p\l\9\t\2\n\i\a\n\r\a\e\1\l\l\t\b\8\f\g\z\u\o\8\w\i\g\z\f\f\m\l\h\4\o\0\c\r\u\1\8\c\j\2\t\r\t\d\7\e\4\v\l\h\t\q\7\g\5\q\1\2\j\o\k\v\5\o\1\x\j\7\k\5\v\k\o\n\t\m\g\f\c\3\8\f\3\v\3\u\q\0\j\m\7\0\o\n\9\r\8\2\c\v\0\6\f\3\9\x\e\4\l\0\d\x\z\2\m\w\6\n\x\9\z\4\c\o\t\f\4\9\o\f\7\3\n\o\z\b\3\g\y\a\h\9\o\z\b\3\s\e\o\j\k\8\v\d\9\3\u\y\1\e\w\b\5\n\m\t\o\1\p\e\g\w\9\j\b\1\6\z\4\k\7\j\h\w\7\h\c\d\b\3\4\6\7\q\s\y\h\k\2\u\p\s\r\o\r\t\x\u\v\z\x\3\0\5\t\n\q\3\1\1\s\q\1\2\e\9\l\4\3\p\p\l\a\3\u\e\y\x\y\t\r\q\w\q\5\t\b\k\2\6\m\8\6\l\4\a\7\v\8\9\j\c\1\w\x\8\n\p\x\t\1\r\f\p\e\u\k\m\a\z\5\t\3\m\v\0\k\w\6\g\n\a\6\a\7\2\a\3\z\4\0\6\s\p\m\l\2\p\8\d\3\l\c\t\h\i\n\9\s\6\a\r\4\4\m\z\u\h\c\f\6\1\5\h\y\8\2\6\r\f\1\m\9\7\3\w\2\5\1\6\v\s\0\i\o\1\s\x\d\f\u\t\n ]] 00:06:52.923 08:17:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.923 08:17:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:52.923 [2024-10-15 08:17:54.623107] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:52.923 [2024-10-15 08:17:54.623238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60868 ] 00:06:53.182 [2024-10-15 08:17:54.763107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.182 [2024-10-15 08:17:54.842245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.441 [2024-10-15 08:17:54.915076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.441  [2024-10-15T08:17:55.431Z] Copying: 512/512 [B] (average 166 kBps) 00:06:53.700 00:06:53.700 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xdhfzoe44jug6nsohwxt5of9q0anlle3zo6nzsjn9izvtujc4txg8je5vyjwst5sp61wo1xek354g20ys362853r68zepc7bdfk7ltke5lkjatzd5kabguiqkjknidwyph9bdcpuuzwdg41s0pl9t2nianrae1lltb8fgzuo8wigzffmlh4o0cru18cj2trtd7e4vlhtq7g5q12jokv5o1xj7k5vkontmgfc38f3v3uq0jm70on9r82cv06f39xe4l0dxz2mw6nx9z4cotf49of73nozb3gyah9ozb3seojk8vd93uy1ewb5nmto1pegw9jb16z4k7jhw7hcdb3467qsyhk2upsrortxuvzx305tnq311sq12e9l43ppla3ueyxytrqwq5tbk26m86l4a7v89jc1wx8npxt1rfpeukmaz5t3mv0kw6gna6a72a3z406spml2p8d3lcthin9s6ar44mzuhcf615hy826rf1m973w2516vs0io1sxdfutn == 
\x\d\h\f\z\o\e\4\4\j\u\g\6\n\s\o\h\w\x\t\5\o\f\9\q\0\a\n\l\l\e\3\z\o\6\n\z\s\j\n\9\i\z\v\t\u\j\c\4\t\x\g\8\j\e\5\v\y\j\w\s\t\5\s\p\6\1\w\o\1\x\e\k\3\5\4\g\2\0\y\s\3\6\2\8\5\3\r\6\8\z\e\p\c\7\b\d\f\k\7\l\t\k\e\5\l\k\j\a\t\z\d\5\k\a\b\g\u\i\q\k\j\k\n\i\d\w\y\p\h\9\b\d\c\p\u\u\z\w\d\g\4\1\s\0\p\l\9\t\2\n\i\a\n\r\a\e\1\l\l\t\b\8\f\g\z\u\o\8\w\i\g\z\f\f\m\l\h\4\o\0\c\r\u\1\8\c\j\2\t\r\t\d\7\e\4\v\l\h\t\q\7\g\5\q\1\2\j\o\k\v\5\o\1\x\j\7\k\5\v\k\o\n\t\m\g\f\c\3\8\f\3\v\3\u\q\0\j\m\7\0\o\n\9\r\8\2\c\v\0\6\f\3\9\x\e\4\l\0\d\x\z\2\m\w\6\n\x\9\z\4\c\o\t\f\4\9\o\f\7\3\n\o\z\b\3\g\y\a\h\9\o\z\b\3\s\e\o\j\k\8\v\d\9\3\u\y\1\e\w\b\5\n\m\t\o\1\p\e\g\w\9\j\b\1\6\z\4\k\7\j\h\w\7\h\c\d\b\3\4\6\7\q\s\y\h\k\2\u\p\s\r\o\r\t\x\u\v\z\x\3\0\5\t\n\q\3\1\1\s\q\1\2\e\9\l\4\3\p\p\l\a\3\u\e\y\x\y\t\r\q\w\q\5\t\b\k\2\6\m\8\6\l\4\a\7\v\8\9\j\c\1\w\x\8\n\p\x\t\1\r\f\p\e\u\k\m\a\z\5\t\3\m\v\0\k\w\6\g\n\a\6\a\7\2\a\3\z\4\0\6\s\p\m\l\2\p\8\d\3\l\c\t\h\i\n\9\s\6\a\r\4\4\m\z\u\h\c\f\6\1\5\h\y\8\2\6\r\f\1\m\9\7\3\w\2\5\1\6\v\s\0\i\o\1\s\x\d\f\u\t\n ]] 00:06:53.700 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:53.700 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:53.700 [2024-10-15 08:17:55.301928] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:53.700 [2024-10-15 08:17:55.302073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60881 ] 00:06:53.959 [2024-10-15 08:17:55.443076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.959 [2024-10-15 08:17:55.521565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.959 [2024-10-15 08:17:55.593837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.959  [2024-10-15T08:17:55.948Z] Copying: 512/512 [B] (average 500 kBps) 00:06:54.218 00:06:54.218 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ xdhfzoe44jug6nsohwxt5of9q0anlle3zo6nzsjn9izvtujc4txg8je5vyjwst5sp61wo1xek354g20ys362853r68zepc7bdfk7ltke5lkjatzd5kabguiqkjknidwyph9bdcpuuzwdg41s0pl9t2nianrae1lltb8fgzuo8wigzffmlh4o0cru18cj2trtd7e4vlhtq7g5q12jokv5o1xj7k5vkontmgfc38f3v3uq0jm70on9r82cv06f39xe4l0dxz2mw6nx9z4cotf49of73nozb3gyah9ozb3seojk8vd93uy1ewb5nmto1pegw9jb16z4k7jhw7hcdb3467qsyhk2upsrortxuvzx305tnq311sq12e9l43ppla3ueyxytrqwq5tbk26m86l4a7v89jc1wx8npxt1rfpeukmaz5t3mv0kw6gna6a72a3z406spml2p8d3lcthin9s6ar44mzuhcf615hy826rf1m973w2516vs0io1sxdfutn == 
\x\d\h\f\z\o\e\4\4\j\u\g\6\n\s\o\h\w\x\t\5\o\f\9\q\0\a\n\l\l\e\3\z\o\6\n\z\s\j\n\9\i\z\v\t\u\j\c\4\t\x\g\8\j\e\5\v\y\j\w\s\t\5\s\p\6\1\w\o\1\x\e\k\3\5\4\g\2\0\y\s\3\6\2\8\5\3\r\6\8\z\e\p\c\7\b\d\f\k\7\l\t\k\e\5\l\k\j\a\t\z\d\5\k\a\b\g\u\i\q\k\j\k\n\i\d\w\y\p\h\9\b\d\c\p\u\u\z\w\d\g\4\1\s\0\p\l\9\t\2\n\i\a\n\r\a\e\1\l\l\t\b\8\f\g\z\u\o\8\w\i\g\z\f\f\m\l\h\4\o\0\c\r\u\1\8\c\j\2\t\r\t\d\7\e\4\v\l\h\t\q\7\g\5\q\1\2\j\o\k\v\5\o\1\x\j\7\k\5\v\k\o\n\t\m\g\f\c\3\8\f\3\v\3\u\q\0\j\m\7\0\o\n\9\r\8\2\c\v\0\6\f\3\9\x\e\4\l\0\d\x\z\2\m\w\6\n\x\9\z\4\c\o\t\f\4\9\o\f\7\3\n\o\z\b\3\g\y\a\h\9\o\z\b\3\s\e\o\j\k\8\v\d\9\3\u\y\1\e\w\b\5\n\m\t\o\1\p\e\g\w\9\j\b\1\6\z\4\k\7\j\h\w\7\h\c\d\b\3\4\6\7\q\s\y\h\k\2\u\p\s\r\o\r\t\x\u\v\z\x\3\0\5\t\n\q\3\1\1\s\q\1\2\e\9\l\4\3\p\p\l\a\3\u\e\y\x\y\t\r\q\w\q\5\t\b\k\2\6\m\8\6\l\4\a\7\v\8\9\j\c\1\w\x\8\n\p\x\t\1\r\f\p\e\u\k\m\a\z\5\t\3\m\v\0\k\w\6\g\n\a\6\a\7\2\a\3\z\4\0\6\s\p\m\l\2\p\8\d\3\l\c\t\h\i\n\9\s\6\a\r\4\4\m\z\u\h\c\f\6\1\5\h\y\8\2\6\r\f\1\m\9\7\3\w\2\5\1\6\v\s\0\i\o\1\s\x\d\f\u\t\n ]] 00:06:54.218 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:54.218 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:54.218 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:54.218 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:54.218 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.218 08:17:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:54.477 [2024-10-15 08:17:56.008426] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:54.477 [2024-10-15 08:17:56.008604] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60894 ] 00:06:54.477 [2024-10-15 08:17:56.150946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.736 [2024-10-15 08:17:56.233072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.736 [2024-10-15 08:17:56.305393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.736  [2024-10-15T08:17:56.726Z] Copying: 512/512 [B] (average 500 kBps) 00:06:54.995 00:06:54.995 08:17:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ sh5z8r729vsxjq5u8qe0j2cquarc9218259lu1bagts33ekptwnbvek33tylgc1lvi9qvxivfrf6whseucvy17k5a36wgauugfn2dum7wgyqx0tfo3y49aoqreuz0rr6zi43zunwm8rrx9i1hessh3hdk6d47fpzuac8zocjqivl82s02bcrd69xh8f31vlzrny6v5g6yy3ez1rwewg40qbl56578gnhqdc1d3y8v2z6wk05i4uxi353b3azcjpj163i21ze5q3w3pb5enqflfhynmfmpifdne1agqmc608cmej6fzohmb2a8ho8ydlfr1wammts4w762t9dbd5rvu9lss9n3g4pgapr021s5ypl5dzagir0yxzdozm6quyecgtikfekoq2xta5lm1p4pd8364ymhdoeodjsz1ufmd74noxjow42ae6ih5xlhkgya8zy5z8fa2d72vqziyxytuc3596jnu8pozx5vxckahx9slwklf6a3eux324ijh79 == \s\h\5\z\8\r\7\2\9\v\s\x\j\q\5\u\8\q\e\0\j\2\c\q\u\a\r\c\9\2\1\8\2\5\9\l\u\1\b\a\g\t\s\3\3\e\k\p\t\w\n\b\v\e\k\3\3\t\y\l\g\c\1\l\v\i\9\q\v\x\i\v\f\r\f\6\w\h\s\e\u\c\v\y\1\7\k\5\a\3\6\w\g\a\u\u\g\f\n\2\d\u\m\7\w\g\y\q\x\0\t\f\o\3\y\4\9\a\o\q\r\e\u\z\0\r\r\6\z\i\4\3\z\u\n\w\m\8\r\r\x\9\i\1\h\e\s\s\h\3\h\d\k\6\d\4\7\f\p\z\u\a\c\8\z\o\c\j\q\i\v\l\8\2\s\0\2\b\c\r\d\6\9\x\h\8\f\3\1\v\l\z\r\n\y\6\v\5\g\6\y\y\3\e\z\1\r\w\e\w\g\4\0\q\b\l\5\6\5\7\8\g\n\h\q\d\c\1\d\3\y\8\v\2\z\6\w\k\0\5\i\4\u\x\i\3\5\3\b\3\a\z\c\j\p\j\1\6\3\i\2\1\z\e\5\q\3\w\3\p\b\5\e\n\q\f\l\f\h\y\n\m\f\m\p\i\f\d\n\e\1\a\g\q\m\c\6\0\8\c\m\e\j\6\f\z\o\h\m\b\2\a\8\h\o\8\y\d\l\f\r\1\w\a\m\m\t\s\4\w\7\6\2\t\9\d\b\d\5\r\v\u\9\l\s\s\9\n\3\g\4\p\g\a\p\r\0\2\1\s\5\y\p\l\5\d\z\a\g\i\r\0\y\x\z\d\o\z\m\6\q\u\y\e\c\g\t\i\k\f\e\k\o\q\2\x\t\a\5\l\m\1\p\4\p\d\8\3\6\4\y\m\h\d\o\e\o\d\j\s\z\1\u\f\m\d\7\4\n\o\x\j\o\w\4\2\a\e\6\i\h\5\x\l\h\k\g\y\a\8\z\y\5\z\8\f\a\2\d\7\2\v\q\z\i\y\x\y\t\u\c\3\5\9\6\j\n\u\8\p\o\z\x\5\v\x\c\k\a\h\x\9\s\l\w\k\l\f\6\a\3\e\u\x\3\2\4\i\j\h\7\9 ]] 00:06:54.995 08:17:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.995 08:17:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:54.995 [2024-10-15 08:17:56.700781] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:54.995 [2024-10-15 08:17:56.701286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60896 ] 00:06:55.253 [2024-10-15 08:17:56.837018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.253 [2024-10-15 08:17:56.918593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.512 [2024-10-15 08:17:56.992759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.512  [2024-10-15T08:17:57.502Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.771 00:06:55.771 08:17:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ sh5z8r729vsxjq5u8qe0j2cquarc9218259lu1bagts33ekptwnbvek33tylgc1lvi9qvxivfrf6whseucvy17k5a36wgauugfn2dum7wgyqx0tfo3y49aoqreuz0rr6zi43zunwm8rrx9i1hessh3hdk6d47fpzuac8zocjqivl82s02bcrd69xh8f31vlzrny6v5g6yy3ez1rwewg40qbl56578gnhqdc1d3y8v2z6wk05i4uxi353b3azcjpj163i21ze5q3w3pb5enqflfhynmfmpifdne1agqmc608cmej6fzohmb2a8ho8ydlfr1wammts4w762t9dbd5rvu9lss9n3g4pgapr021s5ypl5dzagir0yxzdozm6quyecgtikfekoq2xta5lm1p4pd8364ymhdoeodjsz1ufmd74noxjow42ae6ih5xlhkgya8zy5z8fa2d72vqziyxytuc3596jnu8pozx5vxckahx9slwklf6a3eux324ijh79 == \s\h\5\z\8\r\7\2\9\v\s\x\j\q\5\u\8\q\e\0\j\2\c\q\u\a\r\c\9\2\1\8\2\5\9\l\u\1\b\a\g\t\s\3\3\e\k\p\t\w\n\b\v\e\k\3\3\t\y\l\g\c\1\l\v\i\9\q\v\x\i\v\f\r\f\6\w\h\s\e\u\c\v\y\1\7\k\5\a\3\6\w\g\a\u\u\g\f\n\2\d\u\m\7\w\g\y\q\x\0\t\f\o\3\y\4\9\a\o\q\r\e\u\z\0\r\r\6\z\i\4\3\z\u\n\w\m\8\r\r\x\9\i\1\h\e\s\s\h\3\h\d\k\6\d\4\7\f\p\z\u\a\c\8\z\o\c\j\q\i\v\l\8\2\s\0\2\b\c\r\d\6\9\x\h\8\f\3\1\v\l\z\r\n\y\6\v\5\g\6\y\y\3\e\z\1\r\w\e\w\g\4\0\q\b\l\5\6\5\7\8\g\n\h\q\d\c\1\d\3\y\8\v\2\z\6\w\k\0\5\i\4\u\x\i\3\5\3\b\3\a\z\c\j\p\j\1\6\3\i\2\1\z\e\5\q\3\w\3\p\b\5\e\n\q\f\l\f\h\y\n\m\f\m\p\i\f\d\n\e\1\a\g\q\m\c\6\0\8\c\m\e\j\6\f\z\o\h\m\b\2\a\8\h\o\8\y\d\l\f\r\1\w\a\m\m\t\s\4\w\7\6\2\t\9\d\b\d\5\r\v\u\9\l\s\s\9\n\3\g\4\p\g\a\p\r\0\2\1\s\5\y\p\l\5\d\z\a\g\i\r\0\y\x\z\d\o\z\m\6\q\u\y\e\c\g\t\i\k\f\e\k\o\q\2\x\t\a\5\l\m\1\p\4\p\d\8\3\6\4\y\m\h\d\o\e\o\d\j\s\z\1\u\f\m\d\7\4\n\o\x\j\o\w\4\2\a\e\6\i\h\5\x\l\h\k\g\y\a\8\z\y\5\z\8\f\a\2\d\7\2\v\q\z\i\y\x\y\t\u\c\3\5\9\6\j\n\u\8\p\o\z\x\5\v\x\c\k\a\h\x\9\s\l\w\k\l\f\6\a\3\e\u\x\3\2\4\i\j\h\7\9 ]] 00:06:55.771 08:17:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.771 08:17:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:55.771 [2024-10-15 08:17:57.400160] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:55.771 [2024-10-15 08:17:57.400317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60909 ] 00:06:56.030 [2024-10-15 08:17:57.545241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.030 [2024-10-15 08:17:57.629669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.030 [2024-10-15 08:17:57.703457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.030  [2024-10-15T08:17:58.019Z] Copying: 512/512 [B] (average 250 kBps) 00:06:56.288 00:06:56.546 08:17:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ sh5z8r729vsxjq5u8qe0j2cquarc9218259lu1bagts33ekptwnbvek33tylgc1lvi9qvxivfrf6whseucvy17k5a36wgauugfn2dum7wgyqx0tfo3y49aoqreuz0rr6zi43zunwm8rrx9i1hessh3hdk6d47fpzuac8zocjqivl82s02bcrd69xh8f31vlzrny6v5g6yy3ez1rwewg40qbl56578gnhqdc1d3y8v2z6wk05i4uxi353b3azcjpj163i21ze5q3w3pb5enqflfhynmfmpifdne1agqmc608cmej6fzohmb2a8ho8ydlfr1wammts4w762t9dbd5rvu9lss9n3g4pgapr021s5ypl5dzagir0yxzdozm6quyecgtikfekoq2xta5lm1p4pd8364ymhdoeodjsz1ufmd74noxjow42ae6ih5xlhkgya8zy5z8fa2d72vqziyxytuc3596jnu8pozx5vxckahx9slwklf6a3eux324ijh79 == \s\h\5\z\8\r\7\2\9\v\s\x\j\q\5\u\8\q\e\0\j\2\c\q\u\a\r\c\9\2\1\8\2\5\9\l\u\1\b\a\g\t\s\3\3\e\k\p\t\w\n\b\v\e\k\3\3\t\y\l\g\c\1\l\v\i\9\q\v\x\i\v\f\r\f\6\w\h\s\e\u\c\v\y\1\7\k\5\a\3\6\w\g\a\u\u\g\f\n\2\d\u\m\7\w\g\y\q\x\0\t\f\o\3\y\4\9\a\o\q\r\e\u\z\0\r\r\6\z\i\4\3\z\u\n\w\m\8\r\r\x\9\i\1\h\e\s\s\h\3\h\d\k\6\d\4\7\f\p\z\u\a\c\8\z\o\c\j\q\i\v\l\8\2\s\0\2\b\c\r\d\6\9\x\h\8\f\3\1\v\l\z\r\n\y\6\v\5\g\6\y\y\3\e\z\1\r\w\e\w\g\4\0\q\b\l\5\6\5\7\8\g\n\h\q\d\c\1\d\3\y\8\v\2\z\6\w\k\0\5\i\4\u\x\i\3\5\3\b\3\a\z\c\j\p\j\1\6\3\i\2\1\z\e\5\q\3\w\3\p\b\5\e\n\q\f\l\f\h\y\n\m\f\m\p\i\f\d\n\e\1\a\g\q\m\c\6\0\8\c\m\e\j\6\f\z\o\h\m\b\2\a\8\h\o\8\y\d\l\f\r\1\w\a\m\m\t\s\4\w\7\6\2\t\9\d\b\d\5\r\v\u\9\l\s\s\9\n\3\g\4\p\g\a\p\r\0\2\1\s\5\y\p\l\5\d\z\a\g\i\r\0\y\x\z\d\o\z\m\6\q\u\y\e\c\g\t\i\k\f\e\k\o\q\2\x\t\a\5\l\m\1\p\4\p\d\8\3\6\4\y\m\h\d\o\e\o\d\j\s\z\1\u\f\m\d\7\4\n\o\x\j\o\w\4\2\a\e\6\i\h\5\x\l\h\k\g\y\a\8\z\y\5\z\8\f\a\2\d\7\2\v\q\z\i\y\x\y\t\u\c\3\5\9\6\j\n\u\8\p\o\z\x\5\v\x\c\k\a\h\x\9\s\l\w\k\l\f\6\a\3\e\u\x\3\2\4\i\j\h\7\9 ]] 00:06:56.546 08:17:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.546 08:17:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:56.546 [2024-10-15 08:17:58.082732] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:06:56.546 [2024-10-15 08:17:58.082858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:06:56.546 [2024-10-15 08:17:58.216770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.805 [2024-10-15 08:17:58.296717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.805 [2024-10-15 08:17:58.369200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.805  [2024-10-15T08:17:58.796Z] Copying: 512/512 [B] (average 500 kBps) 00:06:57.065 00:06:57.065 08:17:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ sh5z8r729vsxjq5u8qe0j2cquarc9218259lu1bagts33ekptwnbvek33tylgc1lvi9qvxivfrf6whseucvy17k5a36wgauugfn2dum7wgyqx0tfo3y49aoqreuz0rr6zi43zunwm8rrx9i1hessh3hdk6d47fpzuac8zocjqivl82s02bcrd69xh8f31vlzrny6v5g6yy3ez1rwewg40qbl56578gnhqdc1d3y8v2z6wk05i4uxi353b3azcjpj163i21ze5q3w3pb5enqflfhynmfmpifdne1agqmc608cmej6fzohmb2a8ho8ydlfr1wammts4w762t9dbd5rvu9lss9n3g4pgapr021s5ypl5dzagir0yxzdozm6quyecgtikfekoq2xta5lm1p4pd8364ymhdoeodjsz1ufmd74noxjow42ae6ih5xlhkgya8zy5z8fa2d72vqziyxytuc3596jnu8pozx5vxckahx9slwklf6a3eux324ijh79 == \s\h\5\z\8\r\7\2\9\v\s\x\j\q\5\u\8\q\e\0\j\2\c\q\u\a\r\c\9\2\1\8\2\5\9\l\u\1\b\a\g\t\s\3\3\e\k\p\t\w\n\b\v\e\k\3\3\t\y\l\g\c\1\l\v\i\9\q\v\x\i\v\f\r\f\6\w\h\s\e\u\c\v\y\1\7\k\5\a\3\6\w\g\a\u\u\g\f\n\2\d\u\m\7\w\g\y\q\x\0\t\f\o\3\y\4\9\a\o\q\r\e\u\z\0\r\r\6\z\i\4\3\z\u\n\w\m\8\r\r\x\9\i\1\h\e\s\s\h\3\h\d\k\6\d\4\7\f\p\z\u\a\c\8\z\o\c\j\q\i\v\l\8\2\s\0\2\b\c\r\d\6\9\x\h\8\f\3\1\v\l\z\r\n\y\6\v\5\g\6\y\y\3\e\z\1\r\w\e\w\g\4\0\q\b\l\5\6\5\7\8\g\n\h\q\d\c\1\d\3\y\8\v\2\z\6\w\k\0\5\i\4\u\x\i\3\5\3\b\3\a\z\c\j\p\j\1\6\3\i\2\1\z\e\5\q\3\w\3\p\b\5\e\n\q\f\l\f\h\y\n\m\f\m\p\i\f\d\n\e\1\a\g\q\m\c\6\0\8\c\m\e\j\6\f\z\o\h\m\b\2\a\8\h\o\8\y\d\l\f\r\1\w\a\m\m\t\s\4\w\7\6\2\t\9\d\b\d\5\r\v\u\9\l\s\s\9\n\3\g\4\p\g\a\p\r\0\2\1\s\5\y\p\l\5\d\z\a\g\i\r\0\y\x\z\d\o\z\m\6\q\u\y\e\c\g\t\i\k\f\e\k\o\q\2\x\t\a\5\l\m\1\p\4\p\d\8\3\6\4\y\m\h\d\o\e\o\d\j\s\z\1\u\f\m\d\7\4\n\o\x\j\o\w\4\2\a\e\6\i\h\5\x\l\h\k\g\y\a\8\z\y\5\z\8\f\a\2\d\7\2\v\q\z\i\y\x\y\t\u\c\3\5\9\6\j\n\u\8\p\o\z\x\5\v\x\c\k\a\h\x\9\s\l\w\k\l\f\6\a\3\e\u\x\3\2\4\i\j\h\7\9 ]] 00:06:57.065 00:06:57.065 real 0m5.567s 00:06:57.065 user 0m3.165s 00:06:57.065 sys 0m1.404s 00:06:57.065 08:17:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.065 08:17:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:57.065 ************************************ 00:06:57.065 END TEST dd_flags_misc_forced_aio 00:06:57.065 ************************************ 00:06:57.065 08:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:57.065 08:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:57.065 08:17:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:57.065 00:06:57.065 real 0m24.517s 00:06:57.065 user 0m12.572s 00:06:57.065 sys 0m8.431s 00:06:57.065 08:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.065 08:17:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:57.065 ************************************ 00:06:57.065 END TEST spdk_dd_posix 00:06:57.065 ************************************ 00:06:57.324 08:17:58 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:57.324 08:17:58 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.324 08:17:58 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.324 08:17:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:57.324 ************************************ 00:06:57.324 START TEST spdk_dd_malloc 00:06:57.324 ************************************ 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:57.324 * Looking for test storage... 00:06:57.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.324 --rc genhtml_branch_coverage=1 00:06:57.324 --rc genhtml_function_coverage=1 00:06:57.324 --rc genhtml_legend=1 00:06:57.324 --rc geninfo_all_blocks=1 00:06:57.324 --rc geninfo_unexecuted_blocks=1 00:06:57.324 00:06:57.324 ' 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.324 --rc genhtml_branch_coverage=1 00:06:57.324 --rc genhtml_function_coverage=1 00:06:57.324 --rc genhtml_legend=1 00:06:57.324 --rc geninfo_all_blocks=1 00:06:57.324 --rc geninfo_unexecuted_blocks=1 00:06:57.324 00:06:57.324 ' 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.324 --rc genhtml_branch_coverage=1 00:06:57.324 --rc genhtml_function_coverage=1 00:06:57.324 --rc genhtml_legend=1 00:06:57.324 --rc geninfo_all_blocks=1 00:06:57.324 --rc geninfo_unexecuted_blocks=1 00:06:57.324 00:06:57.324 ' 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.324 --rc genhtml_branch_coverage=1 00:06:57.324 --rc genhtml_function_coverage=1 00:06:57.324 --rc genhtml_legend=1 00:06:57.324 --rc geninfo_all_blocks=1 00:06:57.324 --rc geninfo_unexecuted_blocks=1 00:06:57.324 00:06:57.324 ' 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.324 08:17:58 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.324 08:17:58 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:57.324 ************************************ 00:06:57.324 START TEST dd_malloc_copy 00:06:57.324 ************************************ 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:57.324 08:17:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:57.583 [2024-10-15 08:17:59.064891] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:06:57.583 [2024-10-15 08:17:59.064996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61006 ] 00:06:57.583 { 00:06:57.583 "subsystems": [ 00:06:57.583 { 00:06:57.583 "subsystem": "bdev", 00:06:57.583 "config": [ 00:06:57.583 { 00:06:57.583 "params": { 00:06:57.583 "block_size": 512, 00:06:57.583 "num_blocks": 1048576, 00:06:57.583 "name": "malloc0" 00:06:57.583 }, 00:06:57.583 "method": "bdev_malloc_create" 00:06:57.583 }, 00:06:57.583 { 00:06:57.583 "params": { 00:06:57.583 "block_size": 512, 00:06:57.583 "num_blocks": 1048576, 00:06:57.583 "name": "malloc1" 00:06:57.583 }, 00:06:57.583 "method": "bdev_malloc_create" 00:06:57.583 }, 00:06:57.583 { 00:06:57.583 "method": "bdev_wait_for_examine" 00:06:57.583 } 00:06:57.583 ] 00:06:57.583 } 00:06:57.584 ] 00:06:57.584 } 00:06:57.584 [2024-10-15 08:17:59.196620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.584 [2024-10-15 08:17:59.275773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.843 [2024-10-15 08:17:59.348967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.219  [2024-10-15T08:18:01.886Z] Copying: 194/512 [MB] (194 MBps) [2024-10-15T08:18:02.534Z] Copying: 389/512 [MB] (195 MBps) [2024-10-15T08:18:03.469Z] Copying: 512/512 [MB] (average 194 MBps) 00:07:01.738 00:07:01.738 08:18:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:01.738 08:18:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:01.738 08:18:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.738 08:18:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.738 [2024-10-15 08:18:03.330201] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
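Editor's note: the malloc copy test boils down to two spdk_dd invocations against a pair of in-memory bdevs; the JSON it streams over /dev/fd/62 is the config echoed in the log above. A rough equivalent with the config written to a file (path chosen here purely for illustration):

    cat > /tmp/dd_malloc.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" }, "method": "bdev_malloc_create" },
      { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" }, "method": "bdev_malloc_create" },
      { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/dd_malloc.json   # forward pass: 1048576 x 512 B = 512 MiB (~194 MBps here)
    spdk_dd --ib=malloc1 --ob=malloc0 --json /tmp/dd_malloc.json   # reverse pass (malloc.sh@33)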
00:07:01.738 [2024-10-15 08:18:03.330310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61060 ] 00:07:01.738 { 00:07:01.738 "subsystems": [ 00:07:01.738 { 00:07:01.738 "subsystem": "bdev", 00:07:01.738 "config": [ 00:07:01.738 { 00:07:01.738 "params": { 00:07:01.738 "block_size": 512, 00:07:01.738 "num_blocks": 1048576, 00:07:01.738 "name": "malloc0" 00:07:01.738 }, 00:07:01.738 "method": "bdev_malloc_create" 00:07:01.738 }, 00:07:01.738 { 00:07:01.738 "params": { 00:07:01.738 "block_size": 512, 00:07:01.738 "num_blocks": 1048576, 00:07:01.738 "name": "malloc1" 00:07:01.738 }, 00:07:01.738 "method": "bdev_malloc_create" 00:07:01.738 }, 00:07:01.738 { 00:07:01.738 "method": "bdev_wait_for_examine" 00:07:01.738 } 00:07:01.738 ] 00:07:01.738 } 00:07:01.738 ] 00:07:01.738 } 00:07:01.997 [2024-10-15 08:18:03.470750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.997 [2024-10-15 08:18:03.550300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.997 [2024-10-15 08:18:03.629605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.373  [2024-10-15T08:18:06.519Z] Copying: 191/512 [MB] (191 MBps) [2024-10-15T08:18:06.779Z] Copying: 382/512 [MB] (191 MBps) [2024-10-15T08:18:07.716Z] Copying: 512/512 [MB] (average 192 MBps) 00:07:05.985 00:07:05.985 00:07:05.985 real 0m8.615s 00:07:05.985 user 0m7.305s 00:07:05.985 sys 0m1.131s 00:07:05.985 08:18:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.985 08:18:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:05.985 ************************************ 00:07:05.985 END TEST dd_malloc_copy 00:07:05.985 ************************************ 00:07:05.985 00:07:05.985 real 0m8.869s 00:07:05.985 user 0m7.439s 00:07:05.985 sys 0m1.255s 00:07:05.985 08:18:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.985 ************************************ 00:07:05.985 END TEST spdk_dd_malloc 00:07:05.985 ************************************ 00:07:05.985 08:18:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:06.245 08:18:07 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:06.245 08:18:07 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:06.245 08:18:07 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.245 08:18:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:06.245 ************************************ 00:07:06.245 START TEST spdk_dd_bdev_to_bdev 00:07:06.245 ************************************ 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:06.245 * Looking for test storage... 
00:07:06.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.245 --rc genhtml_branch_coverage=1 00:07:06.245 --rc genhtml_function_coverage=1 00:07:06.245 --rc genhtml_legend=1 00:07:06.245 --rc geninfo_all_blocks=1 00:07:06.245 --rc geninfo_unexecuted_blocks=1 00:07:06.245 00:07:06.245 ' 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.245 --rc genhtml_branch_coverage=1 00:07:06.245 --rc genhtml_function_coverage=1 00:07:06.245 --rc genhtml_legend=1 00:07:06.245 --rc geninfo_all_blocks=1 00:07:06.245 --rc geninfo_unexecuted_blocks=1 00:07:06.245 00:07:06.245 ' 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.245 --rc genhtml_branch_coverage=1 00:07:06.245 --rc genhtml_function_coverage=1 00:07:06.245 --rc genhtml_legend=1 00:07:06.245 --rc geninfo_all_blocks=1 00:07:06.245 --rc geninfo_unexecuted_blocks=1 00:07:06.245 00:07:06.245 ' 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:06.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.245 --rc genhtml_branch_coverage=1 00:07:06.245 --rc genhtml_function_coverage=1 00:07:06.245 --rc genhtml_legend=1 00:07:06.245 --rc geninfo_all_blocks=1 00:07:06.245 --rc geninfo_unexecuted_blocks=1 00:07:06.245 00:07:06.245 ' 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.245 08:18:07 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.245 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:06.246 ************************************ 00:07:06.246 START TEST dd_inflate_file 00:07:06.246 ************************************ 00:07:06.246 08:18:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:06.503 [2024-10-15 08:18:07.983355] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:06.503 [2024-10-15 08:18:07.983477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61179 ] 00:07:06.503 [2024-10-15 08:18:08.125323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.503 [2024-10-15 08:18:08.216923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.761 [2024-10-15 08:18:08.292259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.761  [2024-10-15T08:18:08.772Z] Copying: 64/64 [MB] (average 1422 MBps) 00:07:07.041 00:07:07.041 00:07:07.041 real 0m0.723s 00:07:07.041 user 0m0.442s 00:07:07.041 sys 0m0.380s 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:07.041 ************************************ 00:07:07.041 END TEST dd_inflate_file 00:07:07.041 ************************************ 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:07.041 ************************************ 00:07:07.041 START TEST dd_copy_to_out_bdev 00:07:07.041 ************************************ 00:07:07.041 08:18:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:07.041 [2024-10-15 08:18:08.763613] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
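Editor's note: the test_file0_size=67108891 reported by the wc -c check above is not an arbitrary number. dd.dump0 already holds the 27-byte "This Is Our Magic, find it" line (26 characters plus a newline), and dd_inflate_file appends 64 MiB of zeroes to it with --oflag=append, so the expected size is 27 + 64*1048576 bytes. Sketch of the arithmetic, with DUMP0 standing in for the dd.dump0 path:

    spdk_dd --if=/dev/zero --of="$DUMP0" --oflag=append --bs=1048576 --count=64   # the inflate step run above
    echo $((27 + 64 * 1048576))   # 67108891 — magic line plus 64 MiB of appended zeroes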
00:07:07.041 [2024-10-15 08:18:08.763733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61217 ] 00:07:07.041 { 00:07:07.041 "subsystems": [ 00:07:07.041 { 00:07:07.041 "subsystem": "bdev", 00:07:07.042 "config": [ 00:07:07.042 { 00:07:07.042 "params": { 00:07:07.042 "trtype": "pcie", 00:07:07.042 "traddr": "0000:00:10.0", 00:07:07.042 "name": "Nvme0" 00:07:07.042 }, 00:07:07.042 "method": "bdev_nvme_attach_controller" 00:07:07.042 }, 00:07:07.042 { 00:07:07.042 "params": { 00:07:07.042 "trtype": "pcie", 00:07:07.042 "traddr": "0000:00:11.0", 00:07:07.042 "name": "Nvme1" 00:07:07.042 }, 00:07:07.042 "method": "bdev_nvme_attach_controller" 00:07:07.042 }, 00:07:07.042 { 00:07:07.042 "method": "bdev_wait_for_examine" 00:07:07.042 } 00:07:07.042 ] 00:07:07.042 } 00:07:07.042 ] 00:07:07.042 } 00:07:07.302 [2024-10-15 08:18:08.903855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.302 [2024-10-15 08:18:08.984999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.561 [2024-10-15 08:18:09.059401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.939  [2024-10-15T08:18:10.670Z] Copying: 59/64 [MB] (59 MBps) [2024-10-15T08:18:10.670Z] Copying: 64/64 [MB] (average 59 MBps) 00:07:08.939 00:07:08.939 00:07:08.939 real 0m1.920s 00:07:08.939 user 0m1.656s 00:07:08.939 sys 0m1.516s 00:07:08.939 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.939 ************************************ 00:07:08.939 END TEST dd_copy_to_out_bdev 00:07:08.939 ************************************ 00:07:08.939 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:09.197 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:09.198 ************************************ 00:07:09.198 START TEST dd_offset_magic 00:07:09.198 ************************************ 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:09.198 08:18:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:09.198 [2024-10-15 08:18:10.738952] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:07:09.198 [2024-10-15 08:18:10.739080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61262 ] 00:07:09.198 { 00:07:09.198 "subsystems": [ 00:07:09.198 { 00:07:09.198 "subsystem": "bdev", 00:07:09.198 "config": [ 00:07:09.198 { 00:07:09.198 "params": { 00:07:09.198 "trtype": "pcie", 00:07:09.198 "traddr": "0000:00:10.0", 00:07:09.198 "name": "Nvme0" 00:07:09.198 }, 00:07:09.198 "method": "bdev_nvme_attach_controller" 00:07:09.198 }, 00:07:09.198 { 00:07:09.198 "params": { 00:07:09.198 "trtype": "pcie", 00:07:09.198 "traddr": "0000:00:11.0", 00:07:09.198 "name": "Nvme1" 00:07:09.198 }, 00:07:09.198 "method": "bdev_nvme_attach_controller" 00:07:09.198 }, 00:07:09.198 { 00:07:09.198 "method": "bdev_wait_for_examine" 00:07:09.198 } 00:07:09.198 ] 00:07:09.198 } 00:07:09.198 ] 00:07:09.198 } 00:07:09.198 [2024-10-15 08:18:10.879001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.457 [2024-10-15 08:18:10.963153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.457 [2024-10-15 08:18:11.043880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.715  [2024-10-15T08:18:11.705Z] Copying: 65/65 [MB] (average 942 MBps) 00:07:09.974 00:07:09.974 08:18:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:09.974 08:18:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:09.974 08:18:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:09.974 08:18:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:09.974 { 00:07:09.974 "subsystems": [ 00:07:09.974 { 00:07:09.974 "subsystem": "bdev", 00:07:09.974 "config": [ 00:07:09.974 { 00:07:09.974 "params": { 00:07:09.974 "trtype": "pcie", 00:07:09.974 "traddr": "0000:00:10.0", 00:07:09.974 "name": "Nvme0" 00:07:09.974 }, 00:07:09.974 "method": "bdev_nvme_attach_controller" 00:07:09.974 }, 00:07:09.974 { 00:07:09.974 "params": { 00:07:09.974 "trtype": "pcie", 00:07:09.974 "traddr": "0000:00:11.0", 00:07:09.974 "name": "Nvme1" 00:07:09.974 }, 00:07:09.974 "method": "bdev_nvme_attach_controller" 00:07:09.974 }, 00:07:09.974 { 00:07:09.974 "method": "bdev_wait_for_examine" 00:07:09.974 } 00:07:09.974 ] 00:07:09.974 } 00:07:09.974 ] 00:07:09.974 } 00:07:09.974 [2024-10-15 08:18:11.677400] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:09.974 [2024-10-15 08:18:11.677526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61277 ] 00:07:10.233 [2024-10-15 08:18:11.816797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.233 [2024-10-15 08:18:11.903722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.543 [2024-10-15 08:18:11.983312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.543  [2024-10-15T08:18:12.553Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:10.822 00:07:10.822 08:18:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:10.822 08:18:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:10.822 08:18:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:10.822 08:18:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:10.822 08:18:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:10.822 08:18:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:10.822 08:18:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:10.822 [2024-10-15 08:18:12.508993] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
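Editor's note: each pass of dd_offset_magic follows the same write-then-verify shape seen above: push 65 MiB from Nvme0n1 into Nvme1n1 starting at a 1 MiB-block offset, read the first block back from that offset into dd.dump1, and confirm the magic line survived the trip. Sketched from the xtrace; conf stands for the bdev JSON streamed over /dev/fd/62, DUMP1 for dd.dump1, and the redirect into read is an assumption since the script's exact plumbing is not shown in the log.

    for offset in 16 64; do                                            # bdev_to_bdev.sh@16
      spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json "$conf"
      spdk_dd --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip="$offset" --bs=1048576 --json "$conf"
      read -rn26 magic_check < "$DUMP1"                                # redirect assumed for this sketch
      [[ $magic_check == 'This Is Our Magic, find it' ]]               # bdev_to_bdev.sh@36
    done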
00:07:10.822 [2024-10-15 08:18:12.509099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61299 ] 00:07:10.822 { 00:07:10.822 "subsystems": [ 00:07:10.822 { 00:07:10.822 "subsystem": "bdev", 00:07:10.822 "config": [ 00:07:10.822 { 00:07:10.822 "params": { 00:07:10.822 "trtype": "pcie", 00:07:10.822 "traddr": "0000:00:10.0", 00:07:10.822 "name": "Nvme0" 00:07:10.822 }, 00:07:10.822 "method": "bdev_nvme_attach_controller" 00:07:10.822 }, 00:07:10.822 { 00:07:10.822 "params": { 00:07:10.822 "trtype": "pcie", 00:07:10.822 "traddr": "0000:00:11.0", 00:07:10.822 "name": "Nvme1" 00:07:10.822 }, 00:07:10.822 "method": "bdev_nvme_attach_controller" 00:07:10.822 }, 00:07:10.822 { 00:07:10.822 "method": "bdev_wait_for_examine" 00:07:10.822 } 00:07:10.822 ] 00:07:10.822 } 00:07:10.822 ] 00:07:10.822 } 00:07:11.140 [2024-10-15 08:18:12.643435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.140 [2024-10-15 08:18:12.725207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.140 [2024-10-15 08:18:12.801987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.400  [2024-10-15T08:18:13.390Z] Copying: 65/65 [MB] (average 1083 MBps) 00:07:11.659 00:07:11.659 08:18:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:11.659 08:18:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:11.659 08:18:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:11.659 08:18:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:11.918 [2024-10-15 08:18:13.422569] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:11.918 [2024-10-15 08:18:13.422670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61319 ] 00:07:11.918 { 00:07:11.918 "subsystems": [ 00:07:11.918 { 00:07:11.918 "subsystem": "bdev", 00:07:11.918 "config": [ 00:07:11.919 { 00:07:11.919 "params": { 00:07:11.919 "trtype": "pcie", 00:07:11.919 "traddr": "0000:00:10.0", 00:07:11.919 "name": "Nvme0" 00:07:11.919 }, 00:07:11.919 "method": "bdev_nvme_attach_controller" 00:07:11.919 }, 00:07:11.919 { 00:07:11.919 "params": { 00:07:11.919 "trtype": "pcie", 00:07:11.919 "traddr": "0000:00:11.0", 00:07:11.919 "name": "Nvme1" 00:07:11.919 }, 00:07:11.919 "method": "bdev_nvme_attach_controller" 00:07:11.919 }, 00:07:11.919 { 00:07:11.919 "method": "bdev_wait_for_examine" 00:07:11.919 } 00:07:11.919 ] 00:07:11.919 } 00:07:11.919 ] 00:07:11.919 } 00:07:11.919 [2024-10-15 08:18:13.558643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.919 [2024-10-15 08:18:13.641370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.178 [2024-10-15 08:18:13.716040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.437  [2024-10-15T08:18:14.427Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:12.696 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:12.696 00:07:12.696 real 0m3.505s 00:07:12.696 user 0m2.499s 00:07:12.696 sys 0m1.206s 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:12.696 ************************************ 00:07:12.696 END TEST dd_offset_magic 00:07:12.696 ************************************ 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:12.696 08:18:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:12.696 [2024-10-15 08:18:14.285270] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:12.696 [2024-10-15 08:18:14.285397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61356 ] 00:07:12.696 { 00:07:12.696 "subsystems": [ 00:07:12.696 { 00:07:12.696 "subsystem": "bdev", 00:07:12.696 "config": [ 00:07:12.696 { 00:07:12.696 "params": { 00:07:12.696 "trtype": "pcie", 00:07:12.696 "traddr": "0000:00:10.0", 00:07:12.696 "name": "Nvme0" 00:07:12.696 }, 00:07:12.696 "method": "bdev_nvme_attach_controller" 00:07:12.696 }, 00:07:12.696 { 00:07:12.696 "params": { 00:07:12.696 "trtype": "pcie", 00:07:12.696 "traddr": "0000:00:11.0", 00:07:12.696 "name": "Nvme1" 00:07:12.696 }, 00:07:12.696 "method": "bdev_nvme_attach_controller" 00:07:12.696 }, 00:07:12.696 { 00:07:12.696 "method": "bdev_wait_for_examine" 00:07:12.696 } 00:07:12.696 ] 00:07:12.696 } 00:07:12.696 ] 00:07:12.696 } 00:07:12.696 [2024-10-15 08:18:14.423627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.956 [2024-10-15 08:18:14.505969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.956 [2024-10-15 08:18:14.581754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.215  [2024-10-15T08:18:15.204Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:13.473 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:13.473 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.473 [2024-10-15 08:18:15.121819] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:13.473 [2024-10-15 08:18:15.121976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61371 ] 00:07:13.473 { 00:07:13.473 "subsystems": [ 00:07:13.473 { 00:07:13.473 "subsystem": "bdev", 00:07:13.473 "config": [ 00:07:13.473 { 00:07:13.473 "params": { 00:07:13.473 "trtype": "pcie", 00:07:13.473 "traddr": "0000:00:10.0", 00:07:13.473 "name": "Nvme0" 00:07:13.473 }, 00:07:13.473 "method": "bdev_nvme_attach_controller" 00:07:13.473 }, 00:07:13.473 { 00:07:13.473 "params": { 00:07:13.473 "trtype": "pcie", 00:07:13.473 "traddr": "0000:00:11.0", 00:07:13.473 "name": "Nvme1" 00:07:13.473 }, 00:07:13.473 "method": "bdev_nvme_attach_controller" 00:07:13.473 }, 00:07:13.473 { 00:07:13.473 "method": "bdev_wait_for_examine" 00:07:13.473 } 00:07:13.473 ] 00:07:13.474 } 00:07:13.474 ] 00:07:13.474 } 00:07:13.732 [2024-10-15 08:18:15.256192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.732 [2024-10-15 08:18:15.332541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.732 [2024-10-15 08:18:15.408227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.991  [2024-10-15T08:18:15.980Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:07:14.249 00:07:14.249 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:14.249 ************************************ 00:07:14.249 END TEST spdk_dd_bdev_to_bdev 00:07:14.249 ************************************ 00:07:14.249 00:07:14.249 real 0m8.168s 00:07:14.249 user 0m5.935s 00:07:14.249 sys 0m3.979s 00:07:14.249 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.249 08:18:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:14.249 08:18:15 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:14.249 08:18:15 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:14.249 08:18:15 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.249 08:18:15 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.249 08:18:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:14.249 ************************************ 00:07:14.249 START TEST spdk_dd_uring 00:07:14.249 ************************************ 00:07:14.249 08:18:15 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:14.509 * Looking for test storage... 
00:07:14.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:14.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.509 --rc genhtml_branch_coverage=1 00:07:14.509 --rc genhtml_function_coverage=1 00:07:14.509 --rc genhtml_legend=1 00:07:14.509 --rc geninfo_all_blocks=1 00:07:14.509 --rc geninfo_unexecuted_blocks=1 00:07:14.509 00:07:14.509 ' 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:14.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.509 --rc genhtml_branch_coverage=1 00:07:14.509 --rc genhtml_function_coverage=1 00:07:14.509 --rc genhtml_legend=1 00:07:14.509 --rc geninfo_all_blocks=1 00:07:14.509 --rc geninfo_unexecuted_blocks=1 00:07:14.509 00:07:14.509 ' 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:14.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.509 --rc genhtml_branch_coverage=1 00:07:14.509 --rc genhtml_function_coverage=1 00:07:14.509 --rc genhtml_legend=1 00:07:14.509 --rc geninfo_all_blocks=1 00:07:14.509 --rc geninfo_unexecuted_blocks=1 00:07:14.509 00:07:14.509 ' 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:14.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.509 --rc genhtml_branch_coverage=1 00:07:14.509 --rc genhtml_function_coverage=1 00:07:14.509 --rc genhtml_legend=1 00:07:14.509 --rc geninfo_all_blocks=1 00:07:14.509 --rc geninfo_unexecuted_blocks=1 00:07:14.509 00:07:14.509 ' 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:14.509 ************************************ 00:07:14.509 START TEST dd_uring_copy 00:07:14.509 ************************************ 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:14.509 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:14.509 
08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=2xhdldjn8c1l15ijjua4r9hoxeonzduu7a2kzl1lyzuoop9r3ebli02zy27kkvdqp294y1gbhm4f6gg07d8padvbhzi4mepo1990hgeu0rs4d0otrztslnw23nyex0epvkuy6k6tnqbodlputahjesfsyq9k20strpppypfwfka3g6ql539z9mfvasu6euqgwgahudj481vxcneqixq6lzbrxxh216y9f8wr0ybaxb3f9548hyknhk4780apf4tzrm5vywvjmcg2wzn7ciky6n06rd1nyt6qsu1mfh1kf3bp26uhrxx36x1mxoc61g44a81j2y1ocfvo1ddvxhtp9wdsaxa4buzfdnqsoxmrco8adebtxr7t3rhi5ea4u515pnm9m79bk6p4xdiqfwfeb9z9m7gwutw8u4xsjh0g2ey6054vgx3eu5tarhcow6k7b8bnc6sim6eyn0bgo4aht4deb5vm42iqicd071emo525a57ldkrzvy61kn8zwip6vbyg21xi4dfv50kxj1xx8rhexhfqdc0t6o6tq33fo05rqqa680xochble90fimsjimb7ijyjp2fz6ji44yf76thqf3s1kghmjjfheqmamxmap4x27t0u0p5u9vm2b0smyc7i3c7em22nof4zx10hji8lp52iw9p4btsez889wp3i4058oa41f02tkik9avsj88td69ce8ezx3avpainbp9tmvwawmec80vwp34iztswr8cfqrzc9t12u7bwtv0mxlfcx3cjgh76eakrrj83kuspnviuv5n6lywwlqonii27tyzt421uv2agx15s7io8ql22n82do7veucnjhtq9sygn6wf5a0c62syb1tc5vhv319gy5sqh07njlxn54pmt6am6c4q1vii58mwaehpe85csb94517d72o3h15y3ynrh6m9cpvvr3qszluvcpnbs8rn8fe9uzroz3izc02u0pjy9pd9x6vxf98bh90txv1ap4teuazal1vag0gli22qx2 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
2xhdldjn8c1l15ijjua4r9hoxeonzduu7a2kzl1lyzuoop9r3ebli02zy27kkvdqp294y1gbhm4f6gg07d8padvbhzi4mepo1990hgeu0rs4d0otrztslnw23nyex0epvkuy6k6tnqbodlputahjesfsyq9k20strpppypfwfka3g6ql539z9mfvasu6euqgwgahudj481vxcneqixq6lzbrxxh216y9f8wr0ybaxb3f9548hyknhk4780apf4tzrm5vywvjmcg2wzn7ciky6n06rd1nyt6qsu1mfh1kf3bp26uhrxx36x1mxoc61g44a81j2y1ocfvo1ddvxhtp9wdsaxa4buzfdnqsoxmrco8adebtxr7t3rhi5ea4u515pnm9m79bk6p4xdiqfwfeb9z9m7gwutw8u4xsjh0g2ey6054vgx3eu5tarhcow6k7b8bnc6sim6eyn0bgo4aht4deb5vm42iqicd071emo525a57ldkrzvy61kn8zwip6vbyg21xi4dfv50kxj1xx8rhexhfqdc0t6o6tq33fo05rqqa680xochble90fimsjimb7ijyjp2fz6ji44yf76thqf3s1kghmjjfheqmamxmap4x27t0u0p5u9vm2b0smyc7i3c7em22nof4zx10hji8lp52iw9p4btsez889wp3i4058oa41f02tkik9avsj88td69ce8ezx3avpainbp9tmvwawmec80vwp34iztswr8cfqrzc9t12u7bwtv0mxlfcx3cjgh76eakrrj83kuspnviuv5n6lywwlqonii27tyzt421uv2agx15s7io8ql22n82do7veucnjhtq9sygn6wf5a0c62syb1tc5vhv319gy5sqh07njlxn54pmt6am6c4q1vii58mwaehpe85csb94517d72o3h15y3ynrh6m9cpvvr3qszluvcpnbs8rn8fe9uzroz3izc02u0pjy9pd9x6vxf98bh90txv1ap4teuazal1vag0gli22qx2 00:07:14.510 08:18:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:14.769 [2024-10-15 08:18:16.247787] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:07:14.769 [2024-10-15 08:18:16.247939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61455 ] 00:07:14.769 [2024-10-15 08:18:16.387267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.769 [2024-10-15 08:18:16.469407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.028 [2024-10-15 08:18:16.541241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.967  [2024-10-15T08:18:18.266Z] Copying: 511/511 [MB] (average 835 MBps) 00:07:16.535 00:07:16.535 08:18:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:16.535 08:18:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:16.535 08:18:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:16.535 08:18:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.535 { 00:07:16.535 "subsystems": [ 00:07:16.535 { 00:07:16.535 "subsystem": "bdev", 00:07:16.535 "config": [ 00:07:16.535 { 00:07:16.535 "params": { 00:07:16.535 "block_size": 512, 00:07:16.535 "num_blocks": 1048576, 00:07:16.535 "name": "malloc0" 00:07:16.535 }, 00:07:16.535 "method": "bdev_malloc_create" 00:07:16.535 }, 00:07:16.535 { 00:07:16.535 "params": { 00:07:16.535 "filename": "/dev/zram1", 00:07:16.535 "name": "uring0" 00:07:16.535 }, 00:07:16.535 "method": "bdev_uring_create" 00:07:16.535 }, 00:07:16.535 { 00:07:16.535 "method": "bdev_wait_for_examine" 00:07:16.535 } 00:07:16.535 ] 00:07:16.535 } 00:07:16.535 ] 00:07:16.535 } 00:07:16.535 [2024-10-15 08:18:18.032908] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:16.535 [2024-10-15 08:18:18.033057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61471 ] 00:07:16.535 [2024-10-15 08:18:18.174649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.535 [2024-10-15 08:18:18.255580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.793 [2024-10-15 08:18:18.334056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.182  [2024-10-15T08:18:20.849Z] Copying: 226/512 [MB] (226 MBps) [2024-10-15T08:18:21.108Z] Copying: 449/512 [MB] (222 MBps) [2024-10-15T08:18:21.675Z] Copying: 512/512 [MB] (average 220 MBps) 00:07:19.944 00:07:19.944 08:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:19.944 08:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:19.944 08:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:19.944 08:18:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.944 { 00:07:19.944 "subsystems": [ 00:07:19.944 { 00:07:19.944 "subsystem": "bdev", 00:07:19.944 "config": [ 00:07:19.944 { 00:07:19.944 "params": { 00:07:19.944 "block_size": 512, 00:07:19.944 "num_blocks": 1048576, 00:07:19.944 "name": "malloc0" 00:07:19.944 }, 00:07:19.944 "method": "bdev_malloc_create" 00:07:19.944 }, 00:07:19.944 { 00:07:19.944 "params": { 00:07:19.944 "filename": "/dev/zram1", 00:07:19.944 "name": "uring0" 00:07:19.944 }, 00:07:19.944 "method": "bdev_uring_create" 00:07:19.944 }, 00:07:19.944 { 00:07:19.944 "method": "bdev_wait_for_examine" 00:07:19.944 } 00:07:19.944 ] 00:07:19.944 } 00:07:19.944 ] 00:07:19.944 } 00:07:19.944 [2024-10-15 08:18:21.525387] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:19.944 [2024-10-15 08:18:21.525561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61526 ] 00:07:19.944 [2024-10-15 08:18:21.661978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.203 [2024-10-15 08:18:21.742325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.203 [2024-10-15 08:18:21.815783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.579  [2024-10-15T08:18:24.303Z] Copying: 185/512 [MB] (185 MBps) [2024-10-15T08:18:25.238Z] Copying: 355/512 [MB] (169 MBps) [2024-10-15T08:18:25.497Z] Copying: 512/512 [MB] (average 180 MBps) 00:07:23.766 00:07:23.766 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:23.766 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 2xhdldjn8c1l15ijjua4r9hoxeonzduu7a2kzl1lyzuoop9r3ebli02zy27kkvdqp294y1gbhm4f6gg07d8padvbhzi4mepo1990hgeu0rs4d0otrztslnw23nyex0epvkuy6k6tnqbodlputahjesfsyq9k20strpppypfwfka3g6ql539z9mfvasu6euqgwgahudj481vxcneqixq6lzbrxxh216y9f8wr0ybaxb3f9548hyknhk4780apf4tzrm5vywvjmcg2wzn7ciky6n06rd1nyt6qsu1mfh1kf3bp26uhrxx36x1mxoc61g44a81j2y1ocfvo1ddvxhtp9wdsaxa4buzfdnqsoxmrco8adebtxr7t3rhi5ea4u515pnm9m79bk6p4xdiqfwfeb9z9m7gwutw8u4xsjh0g2ey6054vgx3eu5tarhcow6k7b8bnc6sim6eyn0bgo4aht4deb5vm42iqicd071emo525a57ldkrzvy61kn8zwip6vbyg21xi4dfv50kxj1xx8rhexhfqdc0t6o6tq33fo05rqqa680xochble90fimsjimb7ijyjp2fz6ji44yf76thqf3s1kghmjjfheqmamxmap4x27t0u0p5u9vm2b0smyc7i3c7em22nof4zx10hji8lp52iw9p4btsez889wp3i4058oa41f02tkik9avsj88td69ce8ezx3avpainbp9tmvwawmec80vwp34iztswr8cfqrzc9t12u7bwtv0mxlfcx3cjgh76eakrrj83kuspnviuv5n6lywwlqonii27tyzt421uv2agx15s7io8ql22n82do7veucnjhtq9sygn6wf5a0c62syb1tc5vhv319gy5sqh07njlxn54pmt6am6c4q1vii58mwaehpe85csb94517d72o3h15y3ynrh6m9cpvvr3qszluvcpnbs8rn8fe9uzroz3izc02u0pjy9pd9x6vxf98bh90txv1ap4teuazal1vag0gli22qx2 == 
\2\x\h\d\l\d\j\n\8\c\1\l\1\5\i\j\j\u\a\4\r\9\h\o\x\e\o\n\z\d\u\u\7\a\2\k\z\l\1\l\y\z\u\o\o\p\9\r\3\e\b\l\i\0\2\z\y\2\7\k\k\v\d\q\p\2\9\4\y\1\g\b\h\m\4\f\6\g\g\0\7\d\8\p\a\d\v\b\h\z\i\4\m\e\p\o\1\9\9\0\h\g\e\u\0\r\s\4\d\0\o\t\r\z\t\s\l\n\w\2\3\n\y\e\x\0\e\p\v\k\u\y\6\k\6\t\n\q\b\o\d\l\p\u\t\a\h\j\e\s\f\s\y\q\9\k\2\0\s\t\r\p\p\p\y\p\f\w\f\k\a\3\g\6\q\l\5\3\9\z\9\m\f\v\a\s\u\6\e\u\q\g\w\g\a\h\u\d\j\4\8\1\v\x\c\n\e\q\i\x\q\6\l\z\b\r\x\x\h\2\1\6\y\9\f\8\w\r\0\y\b\a\x\b\3\f\9\5\4\8\h\y\k\n\h\k\4\7\8\0\a\p\f\4\t\z\r\m\5\v\y\w\v\j\m\c\g\2\w\z\n\7\c\i\k\y\6\n\0\6\r\d\1\n\y\t\6\q\s\u\1\m\f\h\1\k\f\3\b\p\2\6\u\h\r\x\x\3\6\x\1\m\x\o\c\6\1\g\4\4\a\8\1\j\2\y\1\o\c\f\v\o\1\d\d\v\x\h\t\p\9\w\d\s\a\x\a\4\b\u\z\f\d\n\q\s\o\x\m\r\c\o\8\a\d\e\b\t\x\r\7\t\3\r\h\i\5\e\a\4\u\5\1\5\p\n\m\9\m\7\9\b\k\6\p\4\x\d\i\q\f\w\f\e\b\9\z\9\m\7\g\w\u\t\w\8\u\4\x\s\j\h\0\g\2\e\y\6\0\5\4\v\g\x\3\e\u\5\t\a\r\h\c\o\w\6\k\7\b\8\b\n\c\6\s\i\m\6\e\y\n\0\b\g\o\4\a\h\t\4\d\e\b\5\v\m\4\2\i\q\i\c\d\0\7\1\e\m\o\5\2\5\a\5\7\l\d\k\r\z\v\y\6\1\k\n\8\z\w\i\p\6\v\b\y\g\2\1\x\i\4\d\f\v\5\0\k\x\j\1\x\x\8\r\h\e\x\h\f\q\d\c\0\t\6\o\6\t\q\3\3\f\o\0\5\r\q\q\a\6\8\0\x\o\c\h\b\l\e\9\0\f\i\m\s\j\i\m\b\7\i\j\y\j\p\2\f\z\6\j\i\4\4\y\f\7\6\t\h\q\f\3\s\1\k\g\h\m\j\j\f\h\e\q\m\a\m\x\m\a\p\4\x\2\7\t\0\u\0\p\5\u\9\v\m\2\b\0\s\m\y\c\7\i\3\c\7\e\m\2\2\n\o\f\4\z\x\1\0\h\j\i\8\l\p\5\2\i\w\9\p\4\b\t\s\e\z\8\8\9\w\p\3\i\4\0\5\8\o\a\4\1\f\0\2\t\k\i\k\9\a\v\s\j\8\8\t\d\6\9\c\e\8\e\z\x\3\a\v\p\a\i\n\b\p\9\t\m\v\w\a\w\m\e\c\8\0\v\w\p\3\4\i\z\t\s\w\r\8\c\f\q\r\z\c\9\t\1\2\u\7\b\w\t\v\0\m\x\l\f\c\x\3\c\j\g\h\7\6\e\a\k\r\r\j\8\3\k\u\s\p\n\v\i\u\v\5\n\6\l\y\w\w\l\q\o\n\i\i\2\7\t\y\z\t\4\2\1\u\v\2\a\g\x\1\5\s\7\i\o\8\q\l\2\2\n\8\2\d\o\7\v\e\u\c\n\j\h\t\q\9\s\y\g\n\6\w\f\5\a\0\c\6\2\s\y\b\1\t\c\5\v\h\v\3\1\9\g\y\5\s\q\h\0\7\n\j\l\x\n\5\4\p\m\t\6\a\m\6\c\4\q\1\v\i\i\5\8\m\w\a\e\h\p\e\8\5\c\s\b\9\4\5\1\7\d\7\2\o\3\h\1\5\y\3\y\n\r\h\6\m\9\c\p\v\v\r\3\q\s\z\l\u\v\c\p\n\b\s\8\r\n\8\f\e\9\u\z\r\o\z\3\i\z\c\0\2\u\0\p\j\y\9\p\d\9\x\6\v\x\f\9\8\b\h\9\0\t\x\v\1\a\p\4\t\e\u\a\z\a\l\1\v\a\g\0\g\l\i\2\2\q\x\2 ]] 00:07:23.766 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:23.766 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 2xhdldjn8c1l15ijjua4r9hoxeonzduu7a2kzl1lyzuoop9r3ebli02zy27kkvdqp294y1gbhm4f6gg07d8padvbhzi4mepo1990hgeu0rs4d0otrztslnw23nyex0epvkuy6k6tnqbodlputahjesfsyq9k20strpppypfwfka3g6ql539z9mfvasu6euqgwgahudj481vxcneqixq6lzbrxxh216y9f8wr0ybaxb3f9548hyknhk4780apf4tzrm5vywvjmcg2wzn7ciky6n06rd1nyt6qsu1mfh1kf3bp26uhrxx36x1mxoc61g44a81j2y1ocfvo1ddvxhtp9wdsaxa4buzfdnqsoxmrco8adebtxr7t3rhi5ea4u515pnm9m79bk6p4xdiqfwfeb9z9m7gwutw8u4xsjh0g2ey6054vgx3eu5tarhcow6k7b8bnc6sim6eyn0bgo4aht4deb5vm42iqicd071emo525a57ldkrzvy61kn8zwip6vbyg21xi4dfv50kxj1xx8rhexhfqdc0t6o6tq33fo05rqqa680xochble90fimsjimb7ijyjp2fz6ji44yf76thqf3s1kghmjjfheqmamxmap4x27t0u0p5u9vm2b0smyc7i3c7em22nof4zx10hji8lp52iw9p4btsez889wp3i4058oa41f02tkik9avsj88td69ce8ezx3avpainbp9tmvwawmec80vwp34iztswr8cfqrzc9t12u7bwtv0mxlfcx3cjgh76eakrrj83kuspnviuv5n6lywwlqonii27tyzt421uv2agx15s7io8ql22n82do7veucnjhtq9sygn6wf5a0c62syb1tc5vhv319gy5sqh07njlxn54pmt6am6c4q1vii58mwaehpe85csb94517d72o3h15y3ynrh6m9cpvvr3qszluvcpnbs8rn8fe9uzroz3izc02u0pjy9pd9x6vxf98bh90txv1ap4teuazal1vag0gli22qx2 == 
\2\x\h\d\l\d\j\n\8\c\1\l\1\5\i\j\j\u\a\4\r\9\h\o\x\e\o\n\z\d\u\u\7\a\2\k\z\l\1\l\y\z\u\o\o\p\9\r\3\e\b\l\i\0\2\z\y\2\7\k\k\v\d\q\p\2\9\4\y\1\g\b\h\m\4\f\6\g\g\0\7\d\8\p\a\d\v\b\h\z\i\4\m\e\p\o\1\9\9\0\h\g\e\u\0\r\s\4\d\0\o\t\r\z\t\s\l\n\w\2\3\n\y\e\x\0\e\p\v\k\u\y\6\k\6\t\n\q\b\o\d\l\p\u\t\a\h\j\e\s\f\s\y\q\9\k\2\0\s\t\r\p\p\p\y\p\f\w\f\k\a\3\g\6\q\l\5\3\9\z\9\m\f\v\a\s\u\6\e\u\q\g\w\g\a\h\u\d\j\4\8\1\v\x\c\n\e\q\i\x\q\6\l\z\b\r\x\x\h\2\1\6\y\9\f\8\w\r\0\y\b\a\x\b\3\f\9\5\4\8\h\y\k\n\h\k\4\7\8\0\a\p\f\4\t\z\r\m\5\v\y\w\v\j\m\c\g\2\w\z\n\7\c\i\k\y\6\n\0\6\r\d\1\n\y\t\6\q\s\u\1\m\f\h\1\k\f\3\b\p\2\6\u\h\r\x\x\3\6\x\1\m\x\o\c\6\1\g\4\4\a\8\1\j\2\y\1\o\c\f\v\o\1\d\d\v\x\h\t\p\9\w\d\s\a\x\a\4\b\u\z\f\d\n\q\s\o\x\m\r\c\o\8\a\d\e\b\t\x\r\7\t\3\r\h\i\5\e\a\4\u\5\1\5\p\n\m\9\m\7\9\b\k\6\p\4\x\d\i\q\f\w\f\e\b\9\z\9\m\7\g\w\u\t\w\8\u\4\x\s\j\h\0\g\2\e\y\6\0\5\4\v\g\x\3\e\u\5\t\a\r\h\c\o\w\6\k\7\b\8\b\n\c\6\s\i\m\6\e\y\n\0\b\g\o\4\a\h\t\4\d\e\b\5\v\m\4\2\i\q\i\c\d\0\7\1\e\m\o\5\2\5\a\5\7\l\d\k\r\z\v\y\6\1\k\n\8\z\w\i\p\6\v\b\y\g\2\1\x\i\4\d\f\v\5\0\k\x\j\1\x\x\8\r\h\e\x\h\f\q\d\c\0\t\6\o\6\t\q\3\3\f\o\0\5\r\q\q\a\6\8\0\x\o\c\h\b\l\e\9\0\f\i\m\s\j\i\m\b\7\i\j\y\j\p\2\f\z\6\j\i\4\4\y\f\7\6\t\h\q\f\3\s\1\k\g\h\m\j\j\f\h\e\q\m\a\m\x\m\a\p\4\x\2\7\t\0\u\0\p\5\u\9\v\m\2\b\0\s\m\y\c\7\i\3\c\7\e\m\2\2\n\o\f\4\z\x\1\0\h\j\i\8\l\p\5\2\i\w\9\p\4\b\t\s\e\z\8\8\9\w\p\3\i\4\0\5\8\o\a\4\1\f\0\2\t\k\i\k\9\a\v\s\j\8\8\t\d\6\9\c\e\8\e\z\x\3\a\v\p\a\i\n\b\p\9\t\m\v\w\a\w\m\e\c\8\0\v\w\p\3\4\i\z\t\s\w\r\8\c\f\q\r\z\c\9\t\1\2\u\7\b\w\t\v\0\m\x\l\f\c\x\3\c\j\g\h\7\6\e\a\k\r\r\j\8\3\k\u\s\p\n\v\i\u\v\5\n\6\l\y\w\w\l\q\o\n\i\i\2\7\t\y\z\t\4\2\1\u\v\2\a\g\x\1\5\s\7\i\o\8\q\l\2\2\n\8\2\d\o\7\v\e\u\c\n\j\h\t\q\9\s\y\g\n\6\w\f\5\a\0\c\6\2\s\y\b\1\t\c\5\v\h\v\3\1\9\g\y\5\s\q\h\0\7\n\j\l\x\n\5\4\p\m\t\6\a\m\6\c\4\q\1\v\i\i\5\8\m\w\a\e\h\p\e\8\5\c\s\b\9\4\5\1\7\d\7\2\o\3\h\1\5\y\3\y\n\r\h\6\m\9\c\p\v\v\r\3\q\s\z\l\u\v\c\p\n\b\s\8\r\n\8\f\e\9\u\z\r\o\z\3\i\z\c\0\2\u\0\p\j\y\9\p\d\9\x\6\v\x\f\9\8\b\h\9\0\t\x\v\1\a\p\4\t\e\u\a\z\a\l\1\v\a\g\0\g\l\i\2\2\q\x\2 ]] 00:07:23.766 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:24.333 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:24.333 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:24.333 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:24.333 08:18:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:24.333 { 00:07:24.333 "subsystems": [ 00:07:24.333 { 00:07:24.333 "subsystem": "bdev", 00:07:24.333 "config": [ 00:07:24.333 { 00:07:24.333 "params": { 00:07:24.333 "block_size": 512, 00:07:24.333 "num_blocks": 1048576, 00:07:24.333 "name": "malloc0" 00:07:24.333 }, 00:07:24.333 "method": "bdev_malloc_create" 00:07:24.333 }, 00:07:24.333 { 00:07:24.333 "params": { 00:07:24.333 "filename": "/dev/zram1", 00:07:24.333 "name": "uring0" 00:07:24.333 }, 00:07:24.333 "method": "bdev_uring_create" 00:07:24.333 }, 00:07:24.333 { 00:07:24.333 "method": "bdev_wait_for_examine" 00:07:24.333 } 00:07:24.333 ] 00:07:24.333 } 00:07:24.333 ] 00:07:24.333 } 00:07:24.333 [2024-10-15 08:18:25.946016] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:24.333 [2024-10-15 08:18:25.946248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:07:24.592 [2024-10-15 08:18:26.099340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.592 [2024-10-15 08:18:26.177750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.592 [2024-10-15 08:18:26.253033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.974  [2024-10-15T08:18:28.640Z] Copying: 151/512 [MB] (151 MBps) [2024-10-15T08:18:29.576Z] Copying: 298/512 [MB] (147 MBps) [2024-10-15T08:18:30.143Z] Copying: 443/512 [MB] (144 MBps) [2024-10-15T08:18:30.710Z] Copying: 512/512 [MB] (average 146 MBps) 00:07:28.979 00:07:28.979 08:18:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:28.979 08:18:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:28.979 08:18:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:28.979 08:18:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:28.979 08:18:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:28.979 08:18:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:28.979 08:18:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:28.979 08:18:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.979 [2024-10-15 08:18:30.607021] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:28.979 [2024-10-15 08:18:30.607166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61665 ] 00:07:28.979 { 00:07:28.979 "subsystems": [ 00:07:28.979 { 00:07:28.980 "subsystem": "bdev", 00:07:28.980 "config": [ 00:07:28.980 { 00:07:28.980 "params": { 00:07:28.980 "block_size": 512, 00:07:28.980 "num_blocks": 1048576, 00:07:28.980 "name": "malloc0" 00:07:28.980 }, 00:07:28.980 "method": "bdev_malloc_create" 00:07:28.980 }, 00:07:28.980 { 00:07:28.980 "params": { 00:07:28.980 "filename": "/dev/zram1", 00:07:28.980 "name": "uring0" 00:07:28.980 }, 00:07:28.980 "method": "bdev_uring_create" 00:07:28.980 }, 00:07:28.980 { 00:07:28.980 "params": { 00:07:28.980 "name": "uring0" 00:07:28.980 }, 00:07:28.980 "method": "bdev_uring_delete" 00:07:28.980 }, 00:07:28.980 { 00:07:28.980 "method": "bdev_wait_for_examine" 00:07:28.980 } 00:07:28.980 ] 00:07:28.980 } 00:07:28.980 ] 00:07:28.980 } 00:07:29.239 [2024-10-15 08:18:30.744488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.239 [2024-10-15 08:18:30.825117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.239 [2024-10-15 08:18:30.896489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.498  [2024-10-15T08:18:31.797Z] Copying: 0/0 [B] (average 0 Bps) 00:07:30.066 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:30.066 08:18:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:30.066 08:18:31 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:30.066 [2024-10-15 08:18:31.766942] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:07:30.066 [2024-10-15 08:18:31.767089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61694 ] 00:07:30.066 { 00:07:30.066 "subsystems": [ 00:07:30.066 { 00:07:30.066 "subsystem": "bdev", 00:07:30.066 "config": [ 00:07:30.066 { 00:07:30.066 "params": { 00:07:30.066 "block_size": 512, 00:07:30.066 "num_blocks": 1048576, 00:07:30.066 "name": "malloc0" 00:07:30.066 }, 00:07:30.066 "method": "bdev_malloc_create" 00:07:30.066 }, 00:07:30.066 { 00:07:30.066 "params": { 00:07:30.066 "filename": "/dev/zram1", 00:07:30.066 "name": "uring0" 00:07:30.066 }, 00:07:30.066 "method": "bdev_uring_create" 00:07:30.066 }, 00:07:30.066 { 00:07:30.066 "params": { 00:07:30.066 "name": "uring0" 00:07:30.066 }, 00:07:30.066 "method": "bdev_uring_delete" 00:07:30.066 }, 00:07:30.066 { 00:07:30.066 "method": "bdev_wait_for_examine" 00:07:30.066 } 00:07:30.066 ] 00:07:30.066 } 00:07:30.066 ] 00:07:30.066 } 00:07:30.325 [2024-10-15 08:18:31.907727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.325 [2024-10-15 08:18:31.994607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.584 [2024-10-15 08:18:32.071293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.843 [2024-10-15 08:18:32.346335] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:30.843 [2024-10-15 08:18:32.346407] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:30.843 [2024-10-15 08:18:32.346419] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:30.843 [2024-10-15 08:18:32.346430] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.102 [2024-10-15 08:18:32.800368] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:31.360 08:18:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:31.619 00:07:31.619 real 0m17.082s 00:07:31.619 user 0m11.542s 00:07:31.619 sys 0m13.811s 00:07:31.619 08:18:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.619 08:18:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:31.619 ************************************ 00:07:31.619 END TEST dd_uring_copy 00:07:31.619 ************************************ 00:07:31.619 00:07:31.619 real 0m17.334s 00:07:31.619 user 0m11.676s 00:07:31.619 sys 0m13.929s 00:07:31.619 08:18:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.619 08:18:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:31.619 ************************************ 00:07:31.619 END TEST spdk_dd_uring 00:07:31.619 ************************************ 00:07:31.619 08:18:33 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:31.619 08:18:33 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.619 08:18:33 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.619 08:18:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:31.619 ************************************ 00:07:31.619 START TEST spdk_dd_sparse 00:07:31.619 ************************************ 00:07:31.619 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:31.878 * Looking for test storage... 00:07:31.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.878 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.879 --rc genhtml_branch_coverage=1 00:07:31.879 --rc genhtml_function_coverage=1 00:07:31.879 --rc genhtml_legend=1 00:07:31.879 --rc geninfo_all_blocks=1 00:07:31.879 --rc geninfo_unexecuted_blocks=1 00:07:31.879 00:07:31.879 ' 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.879 --rc genhtml_branch_coverage=1 00:07:31.879 --rc genhtml_function_coverage=1 00:07:31.879 --rc genhtml_legend=1 00:07:31.879 --rc geninfo_all_blocks=1 00:07:31.879 --rc geninfo_unexecuted_blocks=1 00:07:31.879 00:07:31.879 ' 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.879 --rc genhtml_branch_coverage=1 00:07:31.879 --rc genhtml_function_coverage=1 00:07:31.879 --rc genhtml_legend=1 00:07:31.879 --rc geninfo_all_blocks=1 00:07:31.879 --rc geninfo_unexecuted_blocks=1 00:07:31.879 00:07:31.879 ' 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.879 --rc genhtml_branch_coverage=1 00:07:31.879 --rc genhtml_function_coverage=1 00:07:31.879 --rc genhtml_legend=1 00:07:31.879 --rc geninfo_all_blocks=1 00:07:31.879 --rc geninfo_unexecuted_blocks=1 00:07:31.879 00:07:31.879 ' 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.879 08:18:33 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:31.879 1+0 records in 00:07:31.879 1+0 records out 00:07:31.879 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00812228 s, 516 MB/s 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:31.879 1+0 records in 00:07:31.879 1+0 records out 00:07:31.879 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00832192 s, 504 MB/s 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:31.879 1+0 records in 00:07:31.879 1+0 records out 00:07:31.879 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00643501 s, 652 MB/s 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:31.879 ************************************ 00:07:31.879 START TEST dd_sparse_file_to_file 00:07:31.879 ************************************ 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:31.879 08:18:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:32.138 [2024-10-15 08:18:33.625019] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:32.138 [2024-10-15 08:18:33.625147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61799 ] 00:07:32.138 { 00:07:32.138 "subsystems": [ 00:07:32.138 { 00:07:32.138 "subsystem": "bdev", 00:07:32.138 "config": [ 00:07:32.138 { 00:07:32.138 "params": { 00:07:32.138 "block_size": 4096, 00:07:32.138 "filename": "dd_sparse_aio_disk", 00:07:32.138 "name": "dd_aio" 00:07:32.138 }, 00:07:32.138 "method": "bdev_aio_create" 00:07:32.138 }, 00:07:32.138 { 00:07:32.138 "params": { 00:07:32.138 "lvs_name": "dd_lvstore", 00:07:32.138 "bdev_name": "dd_aio" 00:07:32.138 }, 00:07:32.138 "method": "bdev_lvol_create_lvstore" 00:07:32.138 }, 00:07:32.138 { 00:07:32.138 "method": "bdev_wait_for_examine" 00:07:32.138 } 00:07:32.138 ] 00:07:32.138 } 00:07:32.138 ] 00:07:32.138 } 00:07:32.138 [2024-10-15 08:18:33.763933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.138 [2024-10-15 08:18:33.852525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.399 [2024-10-15 08:18:33.929229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.399  [2024-10-15T08:18:34.389Z] Copying: 12/36 [MB] (average 800 MBps) 00:07:32.658 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:32.658 00:07:32.658 real 0m0.794s 00:07:32.658 user 0m0.492s 00:07:32.658 sys 0m0.452s 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.658 ************************************ 00:07:32.658 END TEST dd_sparse_file_to_file 00:07:32.658 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:32.658 ************************************ 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 ************************************ 00:07:32.917 START TEST dd_sparse_file_to_bdev 
00:07:32.917 ************************************ 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:32.917 08:18:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 { 00:07:32.917 "subsystems": [ 00:07:32.917 { 00:07:32.917 "subsystem": "bdev", 00:07:32.917 "config": [ 00:07:32.917 { 00:07:32.917 "params": { 00:07:32.917 "block_size": 4096, 00:07:32.917 "filename": "dd_sparse_aio_disk", 00:07:32.917 "name": "dd_aio" 00:07:32.918 }, 00:07:32.918 "method": "bdev_aio_create" 00:07:32.918 }, 00:07:32.918 { 00:07:32.918 "params": { 00:07:32.918 "lvs_name": "dd_lvstore", 00:07:32.918 "lvol_name": "dd_lvol", 00:07:32.918 "size_in_mib": 36, 00:07:32.918 "thin_provision": true 00:07:32.918 }, 00:07:32.918 "method": "bdev_lvol_create" 00:07:32.918 }, 00:07:32.918 { 00:07:32.918 "method": "bdev_wait_for_examine" 00:07:32.918 } 00:07:32.918 ] 00:07:32.918 } 00:07:32.918 ] 00:07:32.918 } 00:07:32.918 [2024-10-15 08:18:34.476281] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:32.918 [2024-10-15 08:18:34.476413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61841 ] 00:07:32.918 [2024-10-15 08:18:34.616267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.176 [2024-10-15 08:18:34.696981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.176 [2024-10-15 08:18:34.777653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.176  [2024-10-15T08:18:35.474Z] Copying: 12/36 [MB] (average 521 MBps) 00:07:33.743 00:07:33.743 00:07:33.743 real 0m0.773s 00:07:33.743 user 0m0.477s 00:07:33.743 sys 0m0.456s 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:33.743 ************************************ 00:07:33.743 END TEST dd_sparse_file_to_bdev 00:07:33.743 ************************************ 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:33.743 ************************************ 00:07:33.743 START TEST dd_sparse_bdev_to_file 00:07:33.743 ************************************ 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:33.743 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:33.743 [2024-10-15 08:18:35.287111] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:33.743 [2024-10-15 08:18:35.287231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61879 ] 00:07:33.743 { 00:07:33.743 "subsystems": [ 00:07:33.743 { 00:07:33.743 "subsystem": "bdev", 00:07:33.743 "config": [ 00:07:33.743 { 00:07:33.743 "params": { 00:07:33.743 "block_size": 4096, 00:07:33.743 "filename": "dd_sparse_aio_disk", 00:07:33.743 "name": "dd_aio" 00:07:33.743 }, 00:07:33.743 "method": "bdev_aio_create" 00:07:33.743 }, 00:07:33.743 { 00:07:33.743 "method": "bdev_wait_for_examine" 00:07:33.743 } 00:07:33.743 ] 00:07:33.743 } 00:07:33.743 ] 00:07:33.743 } 00:07:33.743 [2024-10-15 08:18:35.423412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.002 [2024-10-15 08:18:35.507486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.002 [2024-10-15 08:18:35.583468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.002  [2024-10-15T08:18:35.992Z] Copying: 12/36 [MB] (average 1000 MBps) 00:07:34.261 00:07:34.261 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:34.261 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:34.261 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:34.261 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:34.261 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:34.261 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:34.521 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:34.521 08:18:35 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:34.521 00:07:34.521 real 0m0.766s 00:07:34.521 user 0m0.470s 00:07:34.521 sys 0m0.461s 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:34.521 ************************************ 00:07:34.521 END TEST dd_sparse_bdev_to_file 00:07:34.521 ************************************ 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:34.521 00:07:34.521 real 0m2.741s 00:07:34.521 user 0m1.618s 00:07:34.521 sys 0m1.601s 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.521 08:18:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 
00:07:34.521 ************************************ 00:07:34.521 END TEST spdk_dd_sparse 00:07:34.521 ************************************ 00:07:34.521 08:18:36 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:34.521 08:18:36 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.521 08:18:36 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.521 08:18:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:34.521 ************************************ 00:07:34.521 START TEST spdk_dd_negative 00:07:34.521 ************************************ 00:07:34.521 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:34.521 * Looking for test storage... 00:07:34.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:34.521 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:34.521 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:07:34.521 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.781 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:34.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.782 --rc genhtml_branch_coverage=1 00:07:34.782 --rc genhtml_function_coverage=1 00:07:34.782 --rc genhtml_legend=1 00:07:34.782 --rc geninfo_all_blocks=1 00:07:34.782 --rc geninfo_unexecuted_blocks=1 00:07:34.782 00:07:34.782 ' 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:34.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.782 --rc genhtml_branch_coverage=1 00:07:34.782 --rc genhtml_function_coverage=1 00:07:34.782 --rc genhtml_legend=1 00:07:34.782 --rc geninfo_all_blocks=1 00:07:34.782 --rc geninfo_unexecuted_blocks=1 00:07:34.782 00:07:34.782 ' 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:34.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.782 --rc genhtml_branch_coverage=1 00:07:34.782 --rc genhtml_function_coverage=1 00:07:34.782 --rc genhtml_legend=1 00:07:34.782 --rc geninfo_all_blocks=1 00:07:34.782 --rc geninfo_unexecuted_blocks=1 00:07:34.782 00:07:34.782 ' 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:34.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.782 --rc genhtml_branch_coverage=1 00:07:34.782 --rc genhtml_function_coverage=1 00:07:34.782 --rc genhtml_legend=1 00:07:34.782 --rc geninfo_all_blocks=1 00:07:34.782 --rc geninfo_unexecuted_blocks=1 00:07:34.782 00:07:34.782 ' 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:34.782 ************************************ 00:07:34.782 START TEST 
dd_invalid_arguments 00:07:34.782 ************************************ 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.782 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:34.782 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:34.782 00:07:34.782 CPU options: 00:07:34.782 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:34.782 (like [0,1,10]) 00:07:34.782 --lcores lcore to CPU mapping list. The list is in the format: 00:07:34.782 [<,lcores[@CPUs]>...] 00:07:34.782 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:34.782 Within the group, '-' is used for range separator, 00:07:34.782 ',' is used for single number separator. 00:07:34.782 '( )' can be omitted for single element group, 00:07:34.782 '@' can be omitted if cpus and lcores have the same value 00:07:34.782 --disable-cpumask-locks Disable CPU core lock files. 00:07:34.782 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:34.782 pollers in the app support interrupt mode) 00:07:34.782 -p, --main-core main (primary) core for DPDK 00:07:34.782 00:07:34.782 Configuration options: 00:07:34.782 -c, --config, --json JSON config file 00:07:34.782 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:34.782 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:34.782 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:34.782 --rpcs-allowed comma-separated list of permitted RPCS 00:07:34.782 --json-ignore-init-errors don't exit on invalid config entry 00:07:34.782 00:07:34.782 Memory options: 00:07:34.782 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:34.782 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:34.782 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:34.782 -R, --huge-unlink unlink huge files after initialization 00:07:34.782 -n, --mem-channels number of memory channels used for DPDK 00:07:34.782 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:34.782 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:34.782 --no-huge run without using hugepages 00:07:34.782 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:34.782 -i, --shm-id shared memory ID (optional) 00:07:34.782 -g, --single-file-segments force creating just one hugetlbfs file 00:07:34.782 00:07:34.782 PCI options: 00:07:34.782 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:34.782 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:34.782 -u, --no-pci disable PCI access 00:07:34.782 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:34.782 00:07:34.782 Log options: 00:07:34.782 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:34.783 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:34.783 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:34.783 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:34.783 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:34.783 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:34.783 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:34.783 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:34.783 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:34.783 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:34.783 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:34.783 --silence-noticelog disable notice level logging to stderr 00:07:34.783 00:07:34.783 Trace options: 00:07:34.783 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:34.783 setting 0 to disable trace (default 32768) 00:07:34.783 Tracepoints vary in size and can use more than one trace entry. 00:07:34.783 -e, --tpoint-group [:] 00:07:34.783 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:34.783 [2024-10-15 08:18:36.384848] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:34.783 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:34.783 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:34.783 bdev_raid, scheduler, all). 00:07:34.783 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:34.783 a tracepoint group. First tpoint inside a group can be enabled by 00:07:34.783 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:34.783 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:34.783 in /include/spdk_internal/trace_defs.h 00:07:34.783 00:07:34.783 Other options: 00:07:34.783 -h, --help show this usage 00:07:34.783 -v, --version print SPDK version 00:07:34.783 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:34.783 --env-context Opaque context for use of the env implementation 00:07:34.783 00:07:34.783 Application specific: 00:07:34.783 [--------- DD Options ---------] 00:07:34.783 --if Input file. Must specify either --if or --ib. 00:07:34.783 --ib Input bdev. Must specifier either --if or --ib 00:07:34.783 --of Output file. Must specify either --of or --ob. 00:07:34.783 --ob Output bdev. Must specify either --of or --ob. 00:07:34.783 --iflag Input file flags. 00:07:34.783 --oflag Output file flags. 00:07:34.783 --bs I/O unit size (default: 4096) 00:07:34.783 --qd Queue depth (default: 2) 00:07:34.783 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:34.783 --skip Skip this many I/O units at start of input. (default: 0) 00:07:34.783 --seek Skip this many I/O units at start of output. (default: 0) 00:07:34.783 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:34.783 --sparse Enable hole skipping in input target 00:07:34.783 Available iflag and oflag values: 00:07:34.783 append - append mode 00:07:34.783 direct - use direct I/O for data 00:07:34.783 directory - fail unless a directory 00:07:34.783 dsync - use synchronized I/O for data 00:07:34.783 noatime - do not update access time 00:07:34.783 noctty - do not assign controlling terminal from file 00:07:34.783 nofollow - do not follow symlinks 00:07:34.783 nonblock - use non-blocking I/O 00:07:34.783 sync - use synchronized I/O for data and metadata 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.783 00:07:34.783 real 0m0.071s 00:07:34.783 user 0m0.041s 00:07:34.783 sys 0m0.029s 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:34.783 ************************************ 00:07:34.783 END TEST dd_invalid_arguments 00:07:34.783 ************************************ 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:34.783 ************************************ 00:07:34.783 START TEST dd_double_input 00:07:34.783 ************************************ 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.783 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:34.783 [2024-10-15 08:18:36.509979] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.042 00:07:35.042 real 0m0.073s 00:07:35.042 user 0m0.041s 00:07:35.042 sys 0m0.032s 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.042 ************************************ 00:07:35.042 END TEST dd_double_input 00:07:35.042 ************************************ 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.042 ************************************ 00:07:35.042 START TEST dd_double_output 00:07:35.042 ************************************ 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:35.042 [2024-10-15 08:18:36.637039] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.042 00:07:35.042 real 0m0.073s 00:07:35.042 user 0m0.048s 00:07:35.042 sys 0m0.023s 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:35.042 ************************************ 00:07:35.042 END TEST dd_double_output 00:07:35.042 ************************************ 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.042 ************************************ 00:07:35.042 START TEST dd_no_input 00:07:35.042 ************************************ 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.042 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.043 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.043 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.043 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.043 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.043 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:35.301 [2024-10-15 08:18:36.774179] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.301 00:07:35.301 real 0m0.083s 00:07:35.301 user 0m0.051s 00:07:35.301 sys 0m0.030s 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:35.301 ************************************ 00:07:35.301 END TEST dd_no_input 00:07:35.301 ************************************ 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.301 ************************************ 00:07:35.301 START TEST dd_no_output 00:07:35.301 ************************************ 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.301 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.302 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.302 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.302 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:35.302 [2024-10-15 08:18:36.931806] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:35.302 08:18:36 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:35.302 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.302 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:35.302 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.302 00:07:35.302 real 0m0.107s 00:07:35.302 user 0m0.063s 00:07:35.302 sys 0m0.042s 00:07:35.302 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.302 08:18:36 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:35.302 ************************************ 00:07:35.302 END TEST dd_no_output 00:07:35.302 ************************************ 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.302 ************************************ 00:07:35.302 START TEST dd_wrong_blocksize 00:07:35.302 ************************************ 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.302 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:35.611 [2024-10-15 08:18:37.079944] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:35.611 00:07:35.611 real 0m0.085s 00:07:35.611 user 0m0.054s 00:07:35.611 sys 0m0.031s 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 ************************************ 00:07:35.611 END TEST dd_wrong_blocksize 00:07:35.611 ************************************ 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.611 ************************************ 00:07:35.611 START TEST dd_smaller_blocksize 00:07:35.611 ************************************ 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.611 
08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.611 08:18:37 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:35.611 [2024-10-15 08:18:37.218167] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:07:35.611 [2024-10-15 08:18:37.218310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62111 ] 00:07:35.870 [2024-10-15 08:18:37.357958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.870 [2024-10-15 08:18:37.441362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.870 [2024-10-15 08:18:37.518038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.456 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:36.715 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:36.715 [2024-10-15 08:18:38.255312] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:36.715 [2024-10-15 08:18:38.255583] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.715 [2024-10-15 08:18:38.429890] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.975 ************************************ 00:07:36.975 END TEST dd_smaller_blocksize 00:07:36.975 ************************************ 00:07:36.975 00:07:36.975 real 0m1.363s 00:07:36.975 user 0m0.485s 00:07:36.975 sys 0m0.765s 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:36.975 ************************************ 00:07:36.975 START TEST dd_invalid_count 00:07:36.975 ************************************ 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:36.975 [2024-10-15 08:18:38.645790] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.975 ************************************ 00:07:36.975 END TEST dd_invalid_count 00:07:36.975 ************************************ 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.975 00:07:36.975 real 0m0.088s 00:07:36.975 user 0m0.054s 00:07:36.975 sys 0m0.032s 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.975 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:37.234 ************************************ 
00:07:37.234 START TEST dd_invalid_oflag 00:07:37.234 ************************************ 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:37.234 [2024-10-15 08:18:38.776959] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.234 00:07:37.234 real 0m0.075s 00:07:37.234 user 0m0.043s 00:07:37.234 sys 0m0.031s 00:07:37.234 ************************************ 00:07:37.234 END TEST dd_invalid_oflag 00:07:37.234 ************************************ 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:37.234 ************************************ 00:07:37.234 START TEST dd_invalid_iflag 00:07:37.234 
************************************ 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:37.234 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:37.235 [2024-10-15 08:18:38.904383] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.235 00:07:37.235 real 0m0.074s 00:07:37.235 user 0m0.041s 00:07:37.235 sys 0m0.031s 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.235 08:18:38 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:37.235 ************************************ 00:07:37.235 END TEST dd_invalid_iflag 00:07:37.235 ************************************ 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:37.494 ************************************ 00:07:37.494 START TEST dd_unknown_flag 00:07:37.494 ************************************ 00:07:37.494 
08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.494 08:18:38 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:37.494 [2024-10-15 08:18:39.044609] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:37.494 [2024-10-15 08:18:39.045086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62210 ] 00:07:37.494 [2024-10-15 08:18:39.185497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.753 [2024-10-15 08:18:39.270382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.753 [2024-10-15 08:18:39.347372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.753 [2024-10-15 08:18:39.399073] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:37.753 [2024-10-15 08:18:39.399170] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.753 [2024-10-15 08:18:39.399255] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:37.753 [2024-10-15 08:18:39.399273] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.753 [2024-10-15 08:18:39.399562] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:37.753 [2024-10-15 08:18:39.399583] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.753 [2024-10-15 08:18:39.399664] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:37.753 [2024-10-15 08:18:39.399677] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:38.012 [2024-10-15 08:18:39.566975] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.012 00:07:38.012 real 0m0.676s 00:07:38.012 user 0m0.383s 00:07:38.012 sys 0m0.194s 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:38.012 ************************************ 00:07:38.012 END TEST dd_unknown_flag 00:07:38.012 ************************************ 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:38.012 ************************************ 00:07:38.012 START TEST dd_invalid_json 00:07:38.012 ************************************ 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:38.012 08:18:39 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:38.271 [2024-10-15 08:18:39.778148] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:38.271 [2024-10-15 08:18:39.778298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62244 ] 00:07:38.271 [2024-10-15 08:18:39.913183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.530 [2024-10-15 08:18:40.007639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.530 [2024-10-15 08:18:40.007726] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:38.530 [2024-10-15 08:18:40.007742] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:38.530 [2024-10-15 08:18:40.007752] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.530 [2024-10-15 08:18:40.007792] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.530 00:07:38.530 real 0m0.375s 00:07:38.530 user 0m0.197s 00:07:38.530 sys 0m0.075s 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:38.530 ************************************ 00:07:38.530 END TEST dd_invalid_json 00:07:38.530 ************************************ 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:38.530 ************************************ 00:07:38.530 START TEST dd_invalid_seek 00:07:38.530 ************************************ 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:38.530 
08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:38.530 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:38.530 [2024-10-15 08:18:40.205989] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:38.530 [2024-10-15 08:18:40.206155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62268 ] 00:07:38.530 { 00:07:38.530 "subsystems": [ 00:07:38.530 { 00:07:38.530 "subsystem": "bdev", 00:07:38.530 "config": [ 00:07:38.530 { 00:07:38.530 "params": { 00:07:38.530 "block_size": 512, 00:07:38.530 "num_blocks": 512, 00:07:38.530 "name": "malloc0" 00:07:38.530 }, 00:07:38.530 "method": "bdev_malloc_create" 00:07:38.530 }, 00:07:38.530 { 00:07:38.530 "params": { 00:07:38.530 "block_size": 512, 00:07:38.530 "num_blocks": 512, 00:07:38.530 "name": "malloc1" 00:07:38.530 }, 00:07:38.530 "method": "bdev_malloc_create" 00:07:38.530 }, 00:07:38.530 { 00:07:38.530 "method": "bdev_wait_for_examine" 00:07:38.530 } 00:07:38.530 ] 00:07:38.530 } 00:07:38.530 ] 00:07:38.530 } 00:07:38.799 [2024-10-15 08:18:40.346215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.799 [2024-10-15 08:18:40.425032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.799 [2024-10-15 08:18:40.499327] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.087 [2024-10-15 08:18:40.573832] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:39.087 [2024-10-15 08:18:40.573931] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.087 [2024-10-15 08:18:40.749699] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.347 00:07:39.347 real 0m0.690s 00:07:39.347 user 0m0.448s 00:07:39.347 sys 0m0.204s 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:39.347 ************************************ 00:07:39.347 END TEST dd_invalid_seek 00:07:39.347 ************************************ 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.347 ************************************ 00:07:39.347 START TEST dd_invalid_skip 00:07:39.347 ************************************ 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.347 08:18:40 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:39.347 { 00:07:39.347 "subsystems": [ 00:07:39.347 { 00:07:39.347 "subsystem": "bdev", 00:07:39.347 "config": [ 00:07:39.347 { 00:07:39.347 "params": { 00:07:39.347 "block_size": 512, 00:07:39.347 "num_blocks": 512, 00:07:39.347 "name": "malloc0" 00:07:39.347 }, 00:07:39.347 "method": "bdev_malloc_create" 00:07:39.347 }, 00:07:39.347 { 00:07:39.347 "params": { 00:07:39.347 "block_size": 512, 00:07:39.347 "num_blocks": 512, 00:07:39.347 "name": "malloc1" 
00:07:39.347 }, 00:07:39.347 "method": "bdev_malloc_create" 00:07:39.347 }, 00:07:39.347 { 00:07:39.347 "method": "bdev_wait_for_examine" 00:07:39.347 } 00:07:39.347 ] 00:07:39.347 } 00:07:39.347 ] 00:07:39.347 } 00:07:39.347 [2024-10-15 08:18:40.947871] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:07:39.347 [2024-10-15 08:18:40.948002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62307 ] 00:07:39.606 [2024-10-15 08:18:41.087775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.606 [2024-10-15 08:18:41.170791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.606 [2024-10-15 08:18:41.244259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.606 [2024-10-15 08:18:41.327865] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:39.606 [2024-10-15 08:18:41.327999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.865 [2024-10-15 08:18:41.502267] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:39.865 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:07:39.865 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.865 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:07:39.865 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:07:39.865 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:07:39.865 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.865 00:07:39.865 real 0m0.703s 00:07:39.865 user 0m0.448s 00:07:39.865 sys 0m0.212s 00:07:39.865 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.865 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:39.865 ************************************ 00:07:39.865 END TEST dd_invalid_skip 00:07:39.865 ************************************ 00:07:40.123 08:18:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:40.123 08:18:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.123 08:18:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.123 08:18:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:40.123 ************************************ 00:07:40.123 START TEST dd_invalid_input_count 00:07:40.123 ************************************ 00:07:40.123 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:40.124 08:18:41 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.124 08:18:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:40.124 { 00:07:40.124 "subsystems": [ 00:07:40.124 { 00:07:40.124 "subsystem": "bdev", 00:07:40.124 "config": [ 00:07:40.124 { 00:07:40.124 "params": { 00:07:40.124 "block_size": 512, 00:07:40.124 "num_blocks": 512, 00:07:40.124 "name": "malloc0" 00:07:40.124 }, 00:07:40.124 "method": "bdev_malloc_create" 00:07:40.124 }, 00:07:40.124 { 00:07:40.124 "params": { 00:07:40.124 "block_size": 512, 00:07:40.124 "num_blocks": 512, 00:07:40.124 "name": "malloc1" 00:07:40.124 }, 00:07:40.124 "method": "bdev_malloc_create" 00:07:40.124 }, 00:07:40.124 { 00:07:40.124 "method": "bdev_wait_for_examine" 00:07:40.124 } 
00:07:40.124 ] 00:07:40.124 } 00:07:40.124 ] 00:07:40.124 } 00:07:40.124 [2024-10-15 08:18:41.710588] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:07:40.124 [2024-10-15 08:18:41.710694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62340 ] 00:07:40.124 [2024-10-15 08:18:41.853265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.382 [2024-10-15 08:18:41.943470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.382 [2024-10-15 08:18:42.022536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.382 [2024-10-15 08:18:42.101538] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:40.382 [2024-10-15 08:18:42.101644] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.640 [2024-10-15 08:18:42.277362] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.640 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:07:40.640 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.640 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:07:40.640 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:40.640 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:07:40.640 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.640 00:07:40.640 real 0m0.723s 00:07:40.640 user 0m0.469s 00:07:40.640 sys 0m0.212s 00:07:40.640 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.640 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:40.640 ************************************ 00:07:40.640 END TEST dd_invalid_input_count 00:07:40.640 ************************************ 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:40.900 ************************************ 00:07:40.900 START TEST dd_invalid_output_count 00:07:40.900 ************************************ 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.900 08:18:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:40.900 { 00:07:40.900 "subsystems": [ 00:07:40.900 { 00:07:40.900 "subsystem": "bdev", 00:07:40.900 "config": [ 00:07:40.900 { 00:07:40.900 "params": { 00:07:40.900 "block_size": 512, 00:07:40.900 "num_blocks": 512, 00:07:40.900 "name": "malloc0" 00:07:40.900 }, 00:07:40.900 "method": "bdev_malloc_create" 00:07:40.900 }, 00:07:40.900 { 00:07:40.900 "method": "bdev_wait_for_examine" 00:07:40.900 } 00:07:40.900 ] 00:07:40.900 } 00:07:40.900 ] 00:07:40.900 } 00:07:40.900 [2024-10-15 08:18:42.494013] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:40.900 [2024-10-15 08:18:42.494148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62374 ] 00:07:41.159 [2024-10-15 08:18:42.633070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.159 [2024-10-15 08:18:42.712916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.159 [2024-10-15 08:18:42.791430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.159 [2024-10-15 08:18:42.865032] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:41.159 [2024-10-15 08:18:42.865138] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.418 [2024-10-15 08:18:43.048959] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:41.418 08:18:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:07:41.418 08:18:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.418 08:18:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:07:41.418 08:18:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:07:41.418 08:18:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:07:41.418 08:18:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.418 00:07:41.418 real 0m0.724s 00:07:41.418 user 0m0.470s 00:07:41.418 sys 0m0.205s 00:07:41.418 08:18:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.418 08:18:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:41.418 ************************************ 00:07:41.418 END TEST dd_invalid_output_count 00:07:41.418 ************************************ 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:41.677 ************************************ 00:07:41.677 START TEST dd_bs_not_multiple 00:07:41.677 ************************************ 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:41.677 08:18:43 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.677 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:41.677 [2024-10-15 08:18:43.282338] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:41.677 [2024-10-15 08:18:43.282502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62411 ] 00:07:41.677 { 00:07:41.677 "subsystems": [ 00:07:41.677 { 00:07:41.677 "subsystem": "bdev", 00:07:41.677 "config": [ 00:07:41.677 { 00:07:41.677 "params": { 00:07:41.677 "block_size": 512, 00:07:41.677 "num_blocks": 512, 00:07:41.677 "name": "malloc0" 00:07:41.677 }, 00:07:41.678 "method": "bdev_malloc_create" 00:07:41.678 }, 00:07:41.678 { 00:07:41.678 "params": { 00:07:41.678 "block_size": 512, 00:07:41.678 "num_blocks": 512, 00:07:41.678 "name": "malloc1" 00:07:41.678 }, 00:07:41.678 "method": "bdev_malloc_create" 00:07:41.678 }, 00:07:41.678 { 00:07:41.678 "method": "bdev_wait_for_examine" 00:07:41.678 } 00:07:41.678 ] 00:07:41.678 } 00:07:41.678 ] 00:07:41.678 } 00:07:41.937 [2024-10-15 08:18:43.423876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.937 [2024-10-15 08:18:43.507951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.937 [2024-10-15 08:18:43.583836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.937 [2024-10-15 08:18:43.664190] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:41.937 [2024-10-15 08:18:43.664310] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.195 [2024-10-15 08:18:43.846987] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.454 00:07:42.454 real 0m0.741s 00:07:42.454 user 0m0.526s 00:07:42.454 sys 0m0.202s 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:42.454 ************************************ 00:07:42.454 END TEST dd_bs_not_multiple 00:07:42.454 ************************************ 00:07:42.454 00:07:42.454 real 0m7.849s 00:07:42.454 user 0m4.260s 00:07:42.454 sys 0m3.020s 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.454 ************************************ 00:07:42.454 END TEST spdk_dd_negative 00:07:42.454 ************************************ 00:07:42.454 08:18:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:42.454 00:07:42.454 real 1m30.822s 00:07:42.454 user 0m58.403s 00:07:42.455 sys 0m41.273s 00:07:42.455 08:18:44 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.455 08:18:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:42.455 
************************************ 00:07:42.455 END TEST spdk_dd 00:07:42.455 ************************************ 00:07:42.455 08:18:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:42.455 08:18:44 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:42.455 08:18:44 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:42.455 08:18:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.455 08:18:44 -- common/autotest_common.sh@10 -- # set +x 00:07:42.455 08:18:44 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:42.455 08:18:44 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:42.455 08:18:44 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:42.455 08:18:44 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:42.455 08:18:44 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:42.455 08:18:44 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:42.455 08:18:44 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:42.455 08:18:44 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.455 08:18:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.455 08:18:44 -- common/autotest_common.sh@10 -- # set +x 00:07:42.455 ************************************ 00:07:42.455 START TEST nvmf_tcp 00:07:42.455 ************************************ 00:07:42.455 08:18:44 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:42.714 * Looking for test storage... 00:07:42.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.714 08:18:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.714 --rc genhtml_branch_coverage=1 00:07:42.714 --rc genhtml_function_coverage=1 00:07:42.714 --rc genhtml_legend=1 00:07:42.714 --rc geninfo_all_blocks=1 00:07:42.714 --rc geninfo_unexecuted_blocks=1 00:07:42.714 00:07:42.714 ' 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.714 --rc genhtml_branch_coverage=1 00:07:42.714 --rc genhtml_function_coverage=1 00:07:42.714 --rc genhtml_legend=1 00:07:42.714 --rc geninfo_all_blocks=1 00:07:42.714 --rc geninfo_unexecuted_blocks=1 00:07:42.714 00:07:42.714 ' 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.714 --rc genhtml_branch_coverage=1 00:07:42.714 --rc genhtml_function_coverage=1 00:07:42.714 --rc genhtml_legend=1 00:07:42.714 --rc geninfo_all_blocks=1 00:07:42.714 --rc geninfo_unexecuted_blocks=1 00:07:42.714 00:07:42.714 ' 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:42.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.714 --rc genhtml_branch_coverage=1 00:07:42.714 --rc genhtml_function_coverage=1 00:07:42.714 --rc genhtml_legend=1 00:07:42.714 --rc geninfo_all_blocks=1 00:07:42.714 --rc geninfo_unexecuted_blocks=1 00:07:42.714 00:07:42.714 ' 00:07:42.714 08:18:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:42.714 08:18:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:42.714 08:18:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.714 08:18:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.714 ************************************ 00:07:42.714 START TEST nvmf_target_core 00:07:42.714 ************************************ 00:07:42.714 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:42.714 * Looking for test storage... 00:07:42.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:42.714 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:42.714 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:42.714 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.973 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:42.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.974 --rc genhtml_branch_coverage=1 00:07:42.974 --rc genhtml_function_coverage=1 00:07:42.974 --rc genhtml_legend=1 00:07:42.974 --rc geninfo_all_blocks=1 00:07:42.974 --rc geninfo_unexecuted_blocks=1 00:07:42.974 00:07:42.974 ' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:42.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.974 --rc genhtml_branch_coverage=1 00:07:42.974 --rc genhtml_function_coverage=1 00:07:42.974 --rc genhtml_legend=1 00:07:42.974 --rc geninfo_all_blocks=1 00:07:42.974 --rc geninfo_unexecuted_blocks=1 00:07:42.974 00:07:42.974 ' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:42.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.974 --rc genhtml_branch_coverage=1 00:07:42.974 --rc genhtml_function_coverage=1 00:07:42.974 --rc genhtml_legend=1 00:07:42.974 --rc geninfo_all_blocks=1 00:07:42.974 --rc geninfo_unexecuted_blocks=1 00:07:42.974 00:07:42.974 ' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:42.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.974 --rc genhtml_branch_coverage=1 00:07:42.974 --rc genhtml_function_coverage=1 00:07:42.974 --rc genhtml_legend=1 00:07:42.974 --rc geninfo_all_blocks=1 00:07:42.974 --rc geninfo_unexecuted_blocks=1 00:07:42.974 00:07:42.974 ' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.974 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.974 ************************************ 00:07:42.974 START TEST nvmf_host_management 00:07:42.974 ************************************ 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:42.974 * Looking for test storage... 
00:07:42.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:42.974 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:43.233 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:43.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.234 --rc genhtml_branch_coverage=1 00:07:43.234 --rc genhtml_function_coverage=1 00:07:43.234 --rc genhtml_legend=1 00:07:43.234 --rc geninfo_all_blocks=1 00:07:43.234 --rc geninfo_unexecuted_blocks=1 00:07:43.234 00:07:43.234 ' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:43.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.234 --rc genhtml_branch_coverage=1 00:07:43.234 --rc genhtml_function_coverage=1 00:07:43.234 --rc genhtml_legend=1 00:07:43.234 --rc geninfo_all_blocks=1 00:07:43.234 --rc geninfo_unexecuted_blocks=1 00:07:43.234 00:07:43.234 ' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:43.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.234 --rc genhtml_branch_coverage=1 00:07:43.234 --rc genhtml_function_coverage=1 00:07:43.234 --rc genhtml_legend=1 00:07:43.234 --rc geninfo_all_blocks=1 00:07:43.234 --rc geninfo_unexecuted_blocks=1 00:07:43.234 00:07:43.234 ' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:43.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.234 --rc genhtml_branch_coverage=1 00:07:43.234 --rc genhtml_function_coverage=1 00:07:43.234 --rc genhtml_legend=1 00:07:43.234 --rc geninfo_all_blocks=1 00:07:43.234 --rc geninfo_unexecuted_blocks=1 00:07:43.234 00:07:43.234 ' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:43.234 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.234 08:18:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:43.234 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:43.235 Cannot find device "nvmf_init_br" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:43.235 Cannot find device "nvmf_init_br2" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:43.235 Cannot find device "nvmf_tgt_br" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:43.235 Cannot find device "nvmf_tgt_br2" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:43.235 Cannot find device "nvmf_init_br" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:43.235 Cannot find device "nvmf_init_br2" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:43.235 Cannot find device "nvmf_tgt_br" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:43.235 Cannot find device "nvmf_tgt_br2" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:43.235 Cannot find device "nvmf_br" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:43.235 Cannot find device "nvmf_init_if" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:43.235 Cannot find device "nvmf_init_if2" 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:43.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:43.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:43.235 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:43.493 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:43.493 08:18:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:43.493 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:43.752 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:43.752 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:07:43.752 00:07:43.752 --- 10.0.0.3 ping statistics --- 00:07:43.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.752 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:43.752 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:43.752 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:07:43.752 00:07:43.752 --- 10.0.0.4 ping statistics --- 00:07:43.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.752 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:43.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:07:43.752 00:07:43.752 --- 10.0.0.1 ping statistics --- 00:07:43.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.752 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:43.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:43.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:07:43.752 00:07:43.752 --- 10.0.0.2 ping statistics --- 00:07:43.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.752 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # return 0 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=62753 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 62753 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62753 ']' 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.752 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:43.752 [2024-10-15 08:18:45.348501] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
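The veth plumbing traced above gives the host two initiator interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) and moves the target's interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) into the nvmf_tgt_ns_spdk namespace, with all peer ends enslaved to the nvmf_br bridge. Condensed from that trace, showing only the first interface pair and omitting the individual link-up steps:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # host-side initiator end
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end, moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                          # host -> namespace reachability check

With connectivity confirmed, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E), which is the process whose startup banner appears above and whose EAL parameters follow.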
00:07:43.752 [2024-10-15 08:18:45.348652] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.010 [2024-10-15 08:18:45.494172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.010 [2024-10-15 08:18:45.588416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.010 [2024-10-15 08:18:45.588504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.010 [2024-10-15 08:18:45.588520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.010 [2024-10-15 08:18:45.588531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.010 [2024-10-15 08:18:45.588540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.010 [2024-10-15 08:18:45.590257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.010 [2024-10-15 08:18:45.590333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.010 [2024-10-15 08:18:45.590486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:44.010 [2024-10-15 08:18:45.590495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.010 [2024-10-15 08:18:45.669300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.269 [2024-10-15 08:18:45.796713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
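The rm/cat/rpc_cmd sequence above is the usual pattern of batching the subsystem setup into rpcs.txt and piping it through a single rpc.py session. The batch itself is not echoed into this log; only its results are visible below (a Malloc0 bdev and a TCP listener on 10.0.0.3 port 4420), so the following is a hedged reconstruction of what host_management.sh is expected to put in that file, not its literal contents:

# Reconstructed, not copied from the script; the sizes come from the
# MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 settings traced above, and the
# cnode0/host0 names from the JSON and nvmf_subsystem_remove_host call below.
cat <<'EOF' > rpcs.txt
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
EOF
cat rpcs.txt | rpc_cmd        # rpc_cmd forwards the batch to the target's /var/tmp/spdk.sock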
00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.269 Malloc0 00:07:44.269 [2024-10-15 08:18:45.886067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62805 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62805 /var/tmp/bdevperf.sock 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 62805 ']' 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:44.269 { 00:07:44.269 "params": { 00:07:44.269 "name": "Nvme$subsystem", 00:07:44.269 "trtype": "$TEST_TRANSPORT", 00:07:44.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:44.269 "adrfam": "ipv4", 00:07:44.269 "trsvcid": "$NVMF_PORT", 00:07:44.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:44.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:44.269 "hdgst": ${hdgst:-false}, 00:07:44.269 "ddgst": ${ddgst:-false} 00:07:44.269 }, 00:07:44.269 "method": "bdev_nvme_attach_controller" 00:07:44.269 } 00:07:44.269 EOF 00:07:44.269 )") 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:44.269 08:18:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:44.269 "params": { 00:07:44.269 "name": "Nvme0", 00:07:44.269 "trtype": "tcp", 00:07:44.269 "traddr": "10.0.0.3", 00:07:44.269 "adrfam": "ipv4", 00:07:44.269 "trsvcid": "4420", 00:07:44.269 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:44.269 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:44.270 "hdgst": false, 00:07:44.270 "ddgst": false 00:07:44.270 }, 00:07:44.270 "method": "bdev_nvme_attach_controller" 00:07:44.270 }' 00:07:44.270 [2024-10-15 08:18:45.993243] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:07:44.270 [2024-10-15 08:18:45.993879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62805 ] 00:07:44.528 [2024-10-15 08:18:46.130841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.528 [2024-10-15 08:18:46.217916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.787 [2024-10-15 08:18:46.302196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.787 Running I/O for 10 seconds... 
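gen_nvmf_target_json expands the per-controller template shown above into the concrete bdev_nvme_attach_controller config (Nvme0 on 10.0.0.3:4420, subsystem cnode0) that bdevperf reads via --json /dev/fd/63, so the initiator needs no static config file. Once bdevperf reports "Running I/O for 10 seconds...", the waitforio calls traced below poll its RPC socket until the Nvme0n1 bdev has completed enough reads to prove I/O is flowing; condensed, that loop is:

# Condensed from the waitforio trace below; rpc_cmd here targets bdevperf's own
# RPC server on /var/tmp/bdevperf.sock, not the nvmf target.
waitforio() {
  local sock=$1 bdev=$2 i reads
  for (( i = 10; i != 0; i-- )); do
    reads=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && return 0   # 67 on the first poll below, 515 a quarter second later
    sleep 0.25
  done
  return 1
}
waitforio /var/tmp/bdevperf.sock Nvme0n1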
00:07:44.787 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.787 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:44.787 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:44.787 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.787 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:45.046 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.307 08:18:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.307 [2024-10-15 08:18:46.890900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.890976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 
08:18:46.891153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891390] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.307 [2024-10-15 08:18:46.891444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.307 [2024-10-15 08:18:46.891613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.307 [2024-10-15 08:18:46.891625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:45.308 [2024-10-15 08:18:46.891635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.308 [2024-10-15 08:18:46.891807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:45.308 [2024-10-15 08:18:46.891838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.308 [2024-10-15 08:18:46.891937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.891979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.891988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.308 [2024-10-15 08:18:46.892486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:45.308 [2024-10-15 08:18:46.892497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcf7c0 is same with the state(6) to be set 00:07:45.308 [2024-10-15 08:18:46.892615] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfcf7c0 was disconnected and freed. reset controller. 
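The run of ABORTED - SQ DELETION completions above is the expected effect of host_management.sh pulling the host out of the subsystem while bdevperf still has I/O queued, then adding it back so the controller reset that follows can reconnect. A minimal sketch of that sequence, assuming the plain scripts/rpc.py client that rpc_cmd wraps and the NQNs shown in the log:

    # Detach the host; outstanding I/O on the initiator completes with
    # ABORTED - SQ DELETION and bdevperf schedules a controller reset.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-attach the host so the reset/reconnect that follows can succeed.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0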
00:07:45.308 [2024-10-15 08:18:46.893872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:45.308 task offset: 81792 on job bdev=Nvme0n1 fails 00:07:45.308 00:07:45.308 Latency(us) 00:07:45.308 [2024-10-15T08:18:47.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.308 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:45.308 Job: Nvme0n1 ended in about 0.45 seconds with error 00:07:45.308 Verification LBA range: start 0x0 length 0x400 00:07:45.308 Nvme0n1 : 0.45 1281.92 80.12 142.44 0.00 43411.27 7119.59 41228.10 00:07:45.308 [2024-10-15T08:18:47.039Z] =================================================================================================================== 00:07:45.308 [2024-10-15T08:18:47.040Z] Total : 1281.92 80.12 142.44 0.00 43411.27 7119.59 41228.10 00:07:45.309 [2024-10-15 08:18:46.896706] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.309 [2024-10-15 08:18:46.896740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfcfb20 (9): Bad file descriptor 00:07:45.309 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.309 08:18:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:45.309 [2024-10-15 08:18:46.908676] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62805 00:07:46.246 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62805) - No such process 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:46.246 { 00:07:46.246 "params": { 00:07:46.246 "name": "Nvme$subsystem", 00:07:46.246 "trtype": "$TEST_TRANSPORT", 00:07:46.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.246 "adrfam": "ipv4", 00:07:46.246 "trsvcid": "$NVMF_PORT", 00:07:46.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.246 "hdgst": ${hdgst:-false}, 00:07:46.246 "ddgst": ${ddgst:-false} 00:07:46.246 }, 00:07:46.246 "method": "bdev_nvme_attach_controller" 00:07:46.246 } 00:07:46.246 EOF 00:07:46.246 )") 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@580 -- # cat 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:46.246 08:18:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:46.246 "params": { 00:07:46.246 "name": "Nvme0", 00:07:46.246 "trtype": "tcp", 00:07:46.246 "traddr": "10.0.0.3", 00:07:46.246 "adrfam": "ipv4", 00:07:46.246 "trsvcid": "4420", 00:07:46.246 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:46.246 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:46.246 "hdgst": false, 00:07:46.246 "ddgst": false 00:07:46.246 }, 00:07:46.246 "method": "bdev_nvme_attach_controller" 00:07:46.246 }' 00:07:46.246 [2024-10-15 08:18:47.968574] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:07:46.246 [2024-10-15 08:18:47.968689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62845 ] 00:07:46.505 [2024-10-15 08:18:48.113175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.505 [2024-10-15 08:18:48.200585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.763 [2024-10-15 08:18:48.284070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.763 Running I/O for 1 seconds... 00:07:48.141 1408.00 IOPS, 88.00 MiB/s 00:07:48.141 Latency(us) 00:07:48.141 [2024-10-15T08:18:49.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.141 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:48.141 Verification LBA range: start 0x0 length 0x400 00:07:48.141 Nvme0n1 : 1.03 1430.22 89.39 0.00 0.00 43741.08 4706.68 45994.36 00:07:48.141 [2024-10-15T08:18:49.872Z] =================================================================================================================== 00:07:48.141 [2024-10-15T08:18:49.872Z] Total : 1430.22 89.39 0.00 0.00 43741.08 4706.68 45994.36 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.141 08:18:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.141 rmmod nvme_tcp 00:07:48.141 rmmod nvme_fabrics 00:07:48.141 rmmod nvme_keyring 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 62753 ']' 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 62753 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 62753 ']' 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 62753 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62753 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:48.141 killing process with pid 62753 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62753' 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 62753 00:07:48.141 08:18:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 62753 00:07:48.710 [2024-10-15 08:18:50.148580] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:48.710 08:18:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.710 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:48.970 00:07:48.970 real 0m5.893s 00:07:48.970 user 0m20.734s 00:07:48.970 sys 0m1.741s 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.970 ************************************ 00:07:48.970 END TEST nvmf_host_management 00:07:48.970 ************************************ 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.970 ************************************ 00:07:48.970 START TEST nvmf_lvol 00:07:48.970 ************************************ 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:48.970 * Looking for test storage... 
00:07:48.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:48.970 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:49.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.230 --rc genhtml_branch_coverage=1 00:07:49.230 --rc genhtml_function_coverage=1 00:07:49.230 --rc genhtml_legend=1 00:07:49.230 --rc geninfo_all_blocks=1 00:07:49.230 --rc geninfo_unexecuted_blocks=1 00:07:49.230 00:07:49.230 ' 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:49.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.230 --rc genhtml_branch_coverage=1 00:07:49.230 --rc genhtml_function_coverage=1 00:07:49.230 --rc genhtml_legend=1 00:07:49.230 --rc geninfo_all_blocks=1 00:07:49.230 --rc geninfo_unexecuted_blocks=1 00:07:49.230 00:07:49.230 ' 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:49.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.230 --rc genhtml_branch_coverage=1 00:07:49.230 --rc genhtml_function_coverage=1 00:07:49.230 --rc genhtml_legend=1 00:07:49.230 --rc geninfo_all_blocks=1 00:07:49.230 --rc geninfo_unexecuted_blocks=1 00:07:49.230 00:07:49.230 ' 00:07:49.230 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:49.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.230 --rc genhtml_branch_coverage=1 00:07:49.230 --rc genhtml_function_coverage=1 00:07:49.230 --rc genhtml_legend=1 00:07:49.230 --rc geninfo_all_blocks=1 00:07:49.230 --rc geninfo_unexecuted_blocks=1 00:07:49.231 00:07:49.231 ' 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.231 08:18:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.231 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:49.231 
08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # nvmf_veth_init 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
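The variables above (together with the two bridge names defined next) describe the virtual topology that nvmf_veth_init builds for the TCP tests: a network namespace for the target, veth pairs for the initiator and the target, and a bridge tying the host-side peers together. A compact sketch of that setup, mirroring the commands that appear further down in the log (the *_if2 counterparts are wired the same way):

    ip netns add nvmf_tgt_ns_spdk                              # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                    # bridge the host-side peers
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up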
00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:49.231 Cannot find device "nvmf_init_br" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:49.231 Cannot find device "nvmf_init_br2" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:49.231 Cannot find device "nvmf_tgt_br" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:49.231 Cannot find device "nvmf_tgt_br2" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:49.231 Cannot find device "nvmf_init_br" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:49.231 Cannot find device "nvmf_init_br2" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:49.231 Cannot find device "nvmf_tgt_br" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:49.231 Cannot find device "nvmf_tgt_br2" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:49.231 Cannot find device "nvmf_br" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:49.231 Cannot find device "nvmf_init_if" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:49.231 Cannot find device "nvmf_init_if2" 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:49.231 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:49.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:49.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:49.232 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:49.492 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:49.492 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:49.492 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:49.492 08:18:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:49.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:49.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:07:49.492 00:07:49.492 --- 10.0.0.3 ping statistics --- 00:07:49.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.492 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:49.492 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:49.492 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:07:49.492 00:07:49.492 --- 10.0.0.4 ping statistics --- 00:07:49.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.492 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:49.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:49.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:49.492 00:07:49.492 --- 10.0.0.1 ping statistics --- 00:07:49.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.492 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:49.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:49.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:07:49.492 00:07:49.492 --- 10.0.0.2 ping statistics --- 00:07:49.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.492 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # return 0 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:49.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=63117 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 63117 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 63117 ']' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.492 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:49.751 [2024-10-15 08:18:51.256712] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:07:49.751 [2024-10-15 08:18:51.257209] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.751 [2024-10-15 08:18:51.399396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.010 [2024-10-15 08:18:51.487893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.010 [2024-10-15 08:18:51.488266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.010 [2024-10-15 08:18:51.488417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.010 [2024-10-15 08:18:51.488570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.010 [2024-10-15 08:18:51.488605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.010 [2024-10-15 08:18:51.490195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.010 [2024-10-15 08:18:51.490358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.010 [2024-10-15 08:18:51.490362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.010 [2024-10-15 08:18:51.565390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.010 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.010 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:50.011 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:50.011 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.011 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:50.011 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.011 08:18:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:50.270 [2024-10-15 08:18:51.996938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.528 08:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:50.788 08:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:50.788 08:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:51.046 08:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:51.046 08:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:51.305 08:18:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:51.872 08:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e54ba011-8c31-4116-be42-94b441d8dd73 00:07:51.872 08:18:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e54ba011-8c31-4116-be42-94b441d8dd73 lvol 20 00:07:52.131 08:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a301f50f-3c5b-473f-97c7-4abf44d05b8c 00:07:52.131 08:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.390 08:18:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a301f50f-3c5b-473f-97c7-4abf44d05b8c 00:07:52.649 08:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:52.908 [2024-10-15 08:18:54.434334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:52.908 08:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:53.167 08:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63185 00:07:53.167 08:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:53.167 08:18:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:54.103 08:18:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a301f50f-3c5b-473f-97c7-4abf44d05b8c MY_SNAPSHOT 00:07:54.671 08:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=515925ce-2ffb-44a4-8750-4bb76c24853c 00:07:54.671 08:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a301f50f-3c5b-473f-97c7-4abf44d05b8c 30 00:07:54.929 08:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 515925ce-2ffb-44a4-8750-4bb76c24853c MY_CLONE 00:07:55.188 08:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=17e9875f-7103-4bc5-adc1-4526889fbf3b 00:07:55.188 08:18:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 17e9875f-7103-4bc5-adc1-4526889fbf3b 00:07:55.756 08:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63185 00:08:03.873 Initializing NVMe Controllers 00:08:03.873 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:03.873 Controller IO queue size 128, less than required. 00:08:03.873 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:03.873 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:03.873 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:03.873 Initialization complete. Launching workers. 
00:08:03.873 ======================================================== 00:08:03.873 Latency(us) 00:08:03.873 Device Information : IOPS MiB/s Average min max 00:08:03.873 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10027.40 39.17 12775.39 2764.19 71877.09 00:08:03.873 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10082.20 39.38 12706.66 4091.86 98273.85 00:08:03.873 ======================================================== 00:08:03.873 Total : 20109.60 78.55 12740.93 2764.19 98273.85 00:08:03.873 00:08:03.873 08:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.873 08:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a301f50f-3c5b-473f-97c7-4abf44d05b8c 00:08:04.132 08:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e54ba011-8c31-4116-be42-94b441d8dd73 00:08:04.391 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:04.391 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:04.391 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:04.391 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:04.391 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.650 rmmod nvme_tcp 00:08:04.650 rmmod nvme_fabrics 00:08:04.650 rmmod nvme_keyring 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 63117 ']' 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 63117 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 63117 ']' 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 63117 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63117 00:08:04.650 killing process with pid 63117 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 63117' 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 63117 00:08:04.650 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 63117 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:04.909 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:05.168 00:08:05.168 real 0m16.315s 00:08:05.168 user 1m6.386s 00:08:05.168 sys 0m4.677s 00:08:05.168 ************************************ 00:08:05.168 END TEST nvmf_lvol 00:08:05.168 
************************************ 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:05.168 ************************************ 00:08:05.168 START TEST nvmf_lvs_grow 00:08:05.168 ************************************ 00:08:05.168 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:05.428 * Looking for test storage... 00:08:05.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.428 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:05.428 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:05.428 08:19:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:05.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.428 --rc genhtml_branch_coverage=1 00:08:05.428 --rc genhtml_function_coverage=1 00:08:05.428 --rc genhtml_legend=1 00:08:05.428 --rc geninfo_all_blocks=1 00:08:05.428 --rc geninfo_unexecuted_blocks=1 00:08:05.428 00:08:05.428 ' 00:08:05.428 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:05.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.428 --rc genhtml_branch_coverage=1 00:08:05.428 --rc genhtml_function_coverage=1 00:08:05.428 --rc genhtml_legend=1 00:08:05.428 --rc geninfo_all_blocks=1 00:08:05.428 --rc geninfo_unexecuted_blocks=1 00:08:05.428 00:08:05.428 ' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:05.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.429 --rc genhtml_branch_coverage=1 00:08:05.429 --rc genhtml_function_coverage=1 00:08:05.429 --rc genhtml_legend=1 00:08:05.429 --rc geninfo_all_blocks=1 00:08:05.429 --rc geninfo_unexecuted_blocks=1 00:08:05.429 00:08:05.429 ' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:05.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.429 --rc genhtml_branch_coverage=1 00:08:05.429 --rc genhtml_function_coverage=1 00:08:05.429 --rc genhtml_legend=1 00:08:05.429 --rc geninfo_all_blocks=1 00:08:05.429 --rc geninfo_unexecuted_blocks=1 00:08:05.429 00:08:05.429 ' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:05.429 08:19:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.429 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
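The nvmf/common.sh header above defines the test-wide connection parameters: NVMF_PORT=4420, a generated NVME_HOSTNQN/NVME_HOSTID pair, and NVME_CONNECT='nvme connect'. This run drives the target with SPDK's own initiators (spdk_nvme_perf and bdevperf) rather than the kernel driver, but purely as an illustration, the nvme-cli connect call those variables compose (assuming the nqn.2016-06.io.spdk:cnode0 subsystem and the 10.0.0.3:4420 listener created later in the run) would look like:

    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 \
        --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7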
00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:05.429 Cannot find device "nvmf_init_br" 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:05.429 Cannot find device "nvmf_init_br2" 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:05.429 Cannot find device "nvmf_tgt_br" 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:05.429 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.689 Cannot find device "nvmf_tgt_br2" 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:05.689 Cannot find device "nvmf_init_br" 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:05.689 Cannot find device "nvmf_init_br2" 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:05.689 Cannot find device "nvmf_tgt_br" 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:05.689 Cannot find device "nvmf_tgt_br2" 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:05.689 Cannot find device "nvmf_br" 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:05.689 Cannot find device "nvmf_init_if" 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:05.689 Cannot find device "nvmf_init_if2" 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:05.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:05.689 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
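The ipts calls that follow (and the iptr call in the earlier nvmf_lvol teardown) are thin wrappers from nvmf/common.sh: every rule is inserted with an 'SPDK_NVMF:' comment so that teardown can strip only the rules the test added. A minimal sketch consistent with the expansions visible in this trace:

    # tag each inserted rule so it can be found again at cleanup time
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # remove only the tagged rules, leaving the rest of the ruleset untouched
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }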
00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:05.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:05.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:08:05.985 00:08:05.985 --- 10.0.0.3 ping statistics --- 00:08:05.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.985 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:05.985 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:05.985 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:08:05.985 00:08:05.985 --- 10.0.0.4 ping statistics --- 00:08:05.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.985 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:05.985 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:05.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:05.985 00:08:05.985 --- 10.0.0.1 ping statistics --- 00:08:05.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.985 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:05.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:05.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:05.986 00:08:05.986 --- 10.0.0.2 ping statistics --- 00:08:05.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.986 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # return 0 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=63579 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 63579 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 63579 ']' 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.986 08:19:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:05.986 [2024-10-15 08:19:07.570275] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:08:05.986 [2024-10-15 08:19:07.570397] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.279 [2024-10-15 08:19:07.708232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.279 [2024-10-15 08:19:07.798008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.279 [2024-10-15 08:19:07.798101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.279 [2024-10-15 08:19:07.798129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.279 [2024-10-15 08:19:07.798142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.279 [2024-10-15 08:19:07.798151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.279 [2024-10-15 08:19:07.798726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.279 [2024-10-15 08:19:07.876401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:07.214 [2024-10-15 08:19:08.883223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:07.214 ************************************ 00:08:07.214 START TEST lvs_grow_clean 00:08:07.214 ************************************ 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.214 08:19:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.214 08:19:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.780 08:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.780 08:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:08.037 08:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:08.037 08:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:08.037 08:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:08.296 08:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:08.296 08:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:08.296 08:19:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 lvol 150 00:08:08.610 08:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=34251f51-73cf-4a69-8d1f-dc6397540a16 00:08:08.610 08:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:08.610 08:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.869 [2024-10-15 08:19:10.496060] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.869 [2024-10-15 08:19:10.496173] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.869 true 00:08:08.869 08:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:08.869 08:19:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:09.127 08:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:09.127 08:19:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:09.386 08:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34251f51-73cf-4a69-8d1f-dc6397540a16 00:08:09.953 08:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:09.953 [2024-10-15 08:19:11.668857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:10.211 08:19:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63667 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63667 /var/tmp/bdevperf.sock 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 63667 ']' 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.470 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:10.470 [2024-10-15 08:19:12.064675] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:08:10.470 [2024-10-15 08:19:12.064790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63667 ] 00:08:10.728 [2024-10-15 08:19:12.203735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.729 [2024-10-15 08:19:12.287081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.729 [2024-10-15 08:19:12.359570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.729 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.729 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:10.729 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:11.296 Nvme0n1 00:08:11.296 08:19:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:11.554 [ 00:08:11.554 { 00:08:11.554 "name": "Nvme0n1", 00:08:11.554 "aliases": [ 00:08:11.554 "34251f51-73cf-4a69-8d1f-dc6397540a16" 00:08:11.554 ], 00:08:11.554 "product_name": "NVMe disk", 00:08:11.554 "block_size": 4096, 00:08:11.554 "num_blocks": 38912, 00:08:11.554 "uuid": "34251f51-73cf-4a69-8d1f-dc6397540a16", 00:08:11.554 "numa_id": -1, 00:08:11.554 "assigned_rate_limits": { 00:08:11.554 "rw_ios_per_sec": 0, 00:08:11.554 "rw_mbytes_per_sec": 0, 00:08:11.554 "r_mbytes_per_sec": 0, 00:08:11.554 "w_mbytes_per_sec": 0 00:08:11.554 }, 00:08:11.554 "claimed": false, 00:08:11.554 "zoned": false, 00:08:11.554 "supported_io_types": { 00:08:11.554 "read": true, 00:08:11.554 "write": true, 00:08:11.554 "unmap": true, 00:08:11.554 "flush": true, 00:08:11.554 "reset": true, 00:08:11.554 "nvme_admin": true, 00:08:11.554 "nvme_io": true, 00:08:11.554 "nvme_io_md": false, 00:08:11.554 "write_zeroes": true, 00:08:11.554 "zcopy": false, 00:08:11.554 "get_zone_info": false, 00:08:11.554 "zone_management": false, 00:08:11.554 "zone_append": false, 00:08:11.554 "compare": true, 00:08:11.554 "compare_and_write": true, 00:08:11.554 "abort": true, 00:08:11.554 "seek_hole": false, 00:08:11.554 "seek_data": false, 00:08:11.554 "copy": true, 00:08:11.554 "nvme_iov_md": false 00:08:11.554 }, 00:08:11.554 "memory_domains": [ 00:08:11.554 { 00:08:11.554 "dma_device_id": "system", 00:08:11.554 "dma_device_type": 1 00:08:11.554 } 00:08:11.554 ], 00:08:11.554 "driver_specific": { 00:08:11.554 "nvme": [ 00:08:11.554 { 00:08:11.554 "trid": { 00:08:11.554 "trtype": "TCP", 00:08:11.554 "adrfam": "IPv4", 00:08:11.554 "traddr": "10.0.0.3", 00:08:11.554 "trsvcid": "4420", 00:08:11.554 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:11.554 }, 00:08:11.554 "ctrlr_data": { 00:08:11.554 "cntlid": 1, 00:08:11.554 "vendor_id": "0x8086", 00:08:11.554 "model_number": "SPDK bdev Controller", 00:08:11.554 "serial_number": "SPDK0", 00:08:11.554 "firmware_revision": "25.01", 00:08:11.554 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.554 "oacs": { 00:08:11.554 "security": 0, 00:08:11.554 "format": 0, 00:08:11.554 "firmware": 0, 
00:08:11.554 "ns_manage": 0 00:08:11.554 }, 00:08:11.554 "multi_ctrlr": true, 00:08:11.554 "ana_reporting": false 00:08:11.554 }, 00:08:11.554 "vs": { 00:08:11.554 "nvme_version": "1.3" 00:08:11.554 }, 00:08:11.554 "ns_data": { 00:08:11.554 "id": 1, 00:08:11.554 "can_share": true 00:08:11.554 } 00:08:11.554 } 00:08:11.554 ], 00:08:11.554 "mp_policy": "active_passive" 00:08:11.554 } 00:08:11.554 } 00:08:11.554 ] 00:08:11.554 08:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63683 00:08:11.554 08:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.554 08:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:11.554 Running I/O for 10 seconds... 00:08:12.490 Latency(us) 00:08:12.490 [2024-10-15T08:19:14.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.490 Nvme0n1 : 1.00 6899.00 26.95 0.00 0.00 0.00 0.00 0.00 00:08:12.490 [2024-10-15T08:19:14.221Z] =================================================================================================================== 00:08:12.490 [2024-10-15T08:19:14.221Z] Total : 6899.00 26.95 0.00 0.00 0.00 0.00 0.00 00:08:12.490 00:08:13.424 08:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:13.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.682 Nvme0n1 : 2.00 6751.50 26.37 0.00 0.00 0.00 0.00 0.00 00:08:13.682 [2024-10-15T08:19:15.413Z] =================================================================================================================== 00:08:13.682 [2024-10-15T08:19:15.413Z] Total : 6751.50 26.37 0.00 0.00 0.00 0.00 0.00 00:08:13.682 00:08:13.940 true 00:08:13.940 08:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:13.940 08:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:14.197 08:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:14.197 08:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:14.197 08:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63683 00:08:14.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.764 Nvme0n1 : 3.00 6617.67 25.85 0.00 0.00 0.00 0.00 0.00 00:08:14.764 [2024-10-15T08:19:16.495Z] =================================================================================================================== 00:08:14.765 [2024-10-15T08:19:16.496Z] Total : 6617.67 25.85 0.00 0.00 0.00 0.00 0.00 00:08:14.765 00:08:15.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.699 Nvme0n1 : 4.00 6582.50 25.71 0.00 0.00 0.00 0.00 0.00 00:08:15.699 [2024-10-15T08:19:17.430Z] 
=================================================================================================================== 00:08:15.699 [2024-10-15T08:19:17.430Z] Total : 6582.50 25.71 0.00 0.00 0.00 0.00 0.00 00:08:15.699 00:08:16.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.635 Nvme0n1 : 5.00 6521.40 25.47 0.00 0.00 0.00 0.00 0.00 00:08:16.635 [2024-10-15T08:19:18.366Z] =================================================================================================================== 00:08:16.635 [2024-10-15T08:19:18.366Z] Total : 6521.40 25.47 0.00 0.00 0.00 0.00 0.00 00:08:16.635 00:08:17.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.615 Nvme0n1 : 6.00 6530.50 25.51 0.00 0.00 0.00 0.00 0.00 00:08:17.615 [2024-10-15T08:19:19.346Z] =================================================================================================================== 00:08:17.615 [2024-10-15T08:19:19.346Z] Total : 6530.50 25.51 0.00 0.00 0.00 0.00 0.00 00:08:17.615 00:08:18.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.550 Nvme0n1 : 7.00 6486.57 25.34 0.00 0.00 0.00 0.00 0.00 00:08:18.550 [2024-10-15T08:19:20.281Z] =================================================================================================================== 00:08:18.550 [2024-10-15T08:19:20.281Z] Total : 6486.57 25.34 0.00 0.00 0.00 0.00 0.00 00:08:18.550 00:08:19.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.484 Nvme0n1 : 8.00 6437.75 25.15 0.00 0.00 0.00 0.00 0.00 00:08:19.484 [2024-10-15T08:19:21.215Z] =================================================================================================================== 00:08:19.484 [2024-10-15T08:19:21.215Z] Total : 6437.75 25.15 0.00 0.00 0.00 0.00 0.00 00:08:19.484 00:08:20.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.490 Nvme0n1 : 9.00 6399.78 25.00 0.00 0.00 0.00 0.00 0.00 00:08:20.490 [2024-10-15T08:19:22.221Z] =================================================================================================================== 00:08:20.490 [2024-10-15T08:19:22.221Z] Total : 6399.78 25.00 0.00 0.00 0.00 0.00 0.00 00:08:20.490 00:08:21.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.863 Nvme0n1 : 10.00 6356.70 24.83 0.00 0.00 0.00 0.00 0.00 00:08:21.863 [2024-10-15T08:19:23.594Z] =================================================================================================================== 00:08:21.864 [2024-10-15T08:19:23.595Z] Total : 6356.70 24.83 0.00 0.00 0.00 0.00 0.00 00:08:21.864 00:08:21.864 00:08:21.864 Latency(us) 00:08:21.864 [2024-10-15T08:19:23.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.864 Nvme0n1 : 10.00 6367.11 24.87 0.00 0.00 20096.28 12213.53 67204.19 00:08:21.864 [2024-10-15T08:19:23.595Z] =================================================================================================================== 00:08:21.864 [2024-10-15T08:19:23.595Z] Total : 6367.11 24.87 0.00 0.00 20096.28 12213.53 67204.19 00:08:21.864 { 00:08:21.864 "results": [ 00:08:21.864 { 00:08:21.864 "job": "Nvme0n1", 00:08:21.864 "core_mask": "0x2", 00:08:21.864 "workload": "randwrite", 00:08:21.864 "status": "finished", 00:08:21.864 "queue_depth": 128, 00:08:21.864 "io_size": 4096, 00:08:21.864 "runtime": 
10.003757, 00:08:21.864 "iops": 6367.107877570397, 00:08:21.864 "mibps": 24.871515146759364, 00:08:21.864 "io_failed": 0, 00:08:21.864 "io_timeout": 0, 00:08:21.864 "avg_latency_us": 20096.275399724538, 00:08:21.864 "min_latency_us": 12213.527272727273, 00:08:21.864 "max_latency_us": 67204.18909090909 00:08:21.864 } 00:08:21.864 ], 00:08:21.864 "core_count": 1 00:08:21.864 } 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63667 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 63667 ']' 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 63667 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63667 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:21.864 killing process with pid 63667 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63667' 00:08:21.864 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.864 00:08:21.864 Latency(us) 00:08:21.864 [2024-10-15T08:19:23.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.864 [2024-10-15T08:19:23.595Z] =================================================================================================================== 00:08:21.864 [2024-10-15T08:19:23.595Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 63667 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 63667 00:08:21.864 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:22.430 08:19:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.689 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:22.689 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:22.948 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:22.948 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:22.948 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.206 [2024-10-15 08:19:24.931847] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:23.466 08:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:23.725 request: 00:08:23.725 { 00:08:23.725 "uuid": "f93716e6-844e-4d1e-bbb9-6912864c0ae8", 00:08:23.725 "method": "bdev_lvol_get_lvstores", 00:08:23.725 "req_id": 1 00:08:23.725 } 00:08:23.725 Got JSON-RPC error response 00:08:23.725 response: 00:08:23.725 { 00:08:23.725 "code": -19, 00:08:23.725 "message": "No such device" 00:08:23.725 } 00:08:23.725 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:23.725 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.725 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.725 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.725 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.983 aio_bdev 00:08:23.983 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
34251f51-73cf-4a69-8d1f-dc6397540a16 00:08:23.983 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=34251f51-73cf-4a69-8d1f-dc6397540a16 00:08:23.983 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.983 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:23.983 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.983 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.984 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.242 08:19:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 34251f51-73cf-4a69-8d1f-dc6397540a16 -t 2000 00:08:24.501 [ 00:08:24.501 { 00:08:24.501 "name": "34251f51-73cf-4a69-8d1f-dc6397540a16", 00:08:24.501 "aliases": [ 00:08:24.501 "lvs/lvol" 00:08:24.501 ], 00:08:24.501 "product_name": "Logical Volume", 00:08:24.501 "block_size": 4096, 00:08:24.501 "num_blocks": 38912, 00:08:24.501 "uuid": "34251f51-73cf-4a69-8d1f-dc6397540a16", 00:08:24.501 "assigned_rate_limits": { 00:08:24.501 "rw_ios_per_sec": 0, 00:08:24.501 "rw_mbytes_per_sec": 0, 00:08:24.501 "r_mbytes_per_sec": 0, 00:08:24.501 "w_mbytes_per_sec": 0 00:08:24.501 }, 00:08:24.501 "claimed": false, 00:08:24.501 "zoned": false, 00:08:24.501 "supported_io_types": { 00:08:24.501 "read": true, 00:08:24.501 "write": true, 00:08:24.501 "unmap": true, 00:08:24.501 "flush": false, 00:08:24.501 "reset": true, 00:08:24.501 "nvme_admin": false, 00:08:24.501 "nvme_io": false, 00:08:24.501 "nvme_io_md": false, 00:08:24.501 "write_zeroes": true, 00:08:24.501 "zcopy": false, 00:08:24.501 "get_zone_info": false, 00:08:24.501 "zone_management": false, 00:08:24.501 "zone_append": false, 00:08:24.501 "compare": false, 00:08:24.501 "compare_and_write": false, 00:08:24.501 "abort": false, 00:08:24.501 "seek_hole": true, 00:08:24.501 "seek_data": true, 00:08:24.501 "copy": false, 00:08:24.501 "nvme_iov_md": false 00:08:24.501 }, 00:08:24.501 "driver_specific": { 00:08:24.501 "lvol": { 00:08:24.501 "lvol_store_uuid": "f93716e6-844e-4d1e-bbb9-6912864c0ae8", 00:08:24.501 "base_bdev": "aio_bdev", 00:08:24.501 "thin_provision": false, 00:08:24.501 "num_allocated_clusters": 38, 00:08:24.501 "snapshot": false, 00:08:24.501 "clone": false, 00:08:24.501 "esnap_clone": false 00:08:24.501 } 00:08:24.501 } 00:08:24.501 } 00:08:24.501 ] 00:08:24.501 08:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:24.501 08:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:24.501 08:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:25.068 08:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.068 08:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:25.068 08:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:25.326 08:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:25.326 08:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 34251f51-73cf-4a69-8d1f-dc6397540a16 00:08:25.583 08:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f93716e6-844e-4d1e-bbb9-6912864c0ae8 00:08:26.153 08:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.409 08:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:26.667 ************************************ 00:08:26.667 END TEST lvs_grow_clean 00:08:26.667 ************************************ 00:08:26.667 00:08:26.667 real 0m19.412s 00:08:26.667 user 0m18.043s 00:08:26.667 sys 0m2.930s 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.667 ************************************ 00:08:26.667 START TEST lvs_grow_dirty 00:08:26.667 ************************************ 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:26.667 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.232 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:27.232 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:27.491 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:27.491 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:27.491 08:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:27.750 08:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:27.750 08:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:27.750 08:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb lvol 150 00:08:28.008 08:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b411d060-ad34-41b6-bd7f-e9379f6b1ccf 00:08:28.008 08:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.008 08:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:28.266 [2024-10-15 08:19:29.827025] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:28.266 [2024-10-15 08:19:29.827146] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:28.266 true 00:08:28.266 08:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:28.266 08:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:28.525 08:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:28.525 08:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.798 08:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b411d060-ad34-41b6-bd7f-e9379f6b1ccf 00:08:29.056 08:19:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:29.315 [2024-10-15 08:19:30.947756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:29.315 08:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63947 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63947 /var/tmp/bdevperf.sock 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 63947 ']' 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:29.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.573 08:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.573 [2024-10-15 08:19:31.288760] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:08:29.573 [2024-10-15 08:19:31.289838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63947 ] 00:08:29.832 [2024-10-15 08:19:31.435923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.832 [2024-10-15 08:19:31.515944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.091 [2024-10-15 08:19:31.588607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.657 08:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.657 08:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:30.657 08:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:31.224 Nvme0n1 00:08:31.224 08:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:31.482 [ 00:08:31.482 { 00:08:31.483 "name": "Nvme0n1", 00:08:31.483 "aliases": [ 00:08:31.483 "b411d060-ad34-41b6-bd7f-e9379f6b1ccf" 00:08:31.483 ], 00:08:31.483 "product_name": "NVMe disk", 00:08:31.483 "block_size": 4096, 00:08:31.483 "num_blocks": 38912, 00:08:31.483 "uuid": "b411d060-ad34-41b6-bd7f-e9379f6b1ccf", 00:08:31.483 "numa_id": -1, 00:08:31.483 "assigned_rate_limits": { 00:08:31.483 "rw_ios_per_sec": 0, 00:08:31.483 "rw_mbytes_per_sec": 0, 00:08:31.483 "r_mbytes_per_sec": 0, 00:08:31.483 "w_mbytes_per_sec": 0 00:08:31.483 }, 00:08:31.483 "claimed": false, 00:08:31.483 "zoned": false, 00:08:31.483 "supported_io_types": { 00:08:31.483 "read": true, 00:08:31.483 "write": true, 00:08:31.483 "unmap": true, 00:08:31.483 "flush": true, 00:08:31.483 "reset": true, 00:08:31.483 "nvme_admin": true, 00:08:31.483 "nvme_io": true, 00:08:31.483 "nvme_io_md": false, 00:08:31.483 "write_zeroes": true, 00:08:31.483 "zcopy": false, 00:08:31.483 "get_zone_info": false, 00:08:31.483 "zone_management": false, 00:08:31.483 "zone_append": false, 00:08:31.483 "compare": true, 00:08:31.483 "compare_and_write": true, 00:08:31.483 "abort": true, 00:08:31.483 "seek_hole": false, 00:08:31.483 "seek_data": false, 00:08:31.483 "copy": true, 00:08:31.483 "nvme_iov_md": false 00:08:31.483 }, 00:08:31.483 "memory_domains": [ 00:08:31.483 { 00:08:31.483 "dma_device_id": "system", 00:08:31.483 "dma_device_type": 1 00:08:31.483 } 00:08:31.483 ], 00:08:31.483 "driver_specific": { 00:08:31.483 "nvme": [ 00:08:31.483 { 00:08:31.483 "trid": { 00:08:31.483 "trtype": "TCP", 00:08:31.483 "adrfam": "IPv4", 00:08:31.483 "traddr": "10.0.0.3", 00:08:31.483 "trsvcid": "4420", 00:08:31.483 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:31.483 }, 00:08:31.483 "ctrlr_data": { 00:08:31.483 "cntlid": 1, 00:08:31.483 "vendor_id": "0x8086", 00:08:31.483 "model_number": "SPDK bdev Controller", 00:08:31.483 "serial_number": "SPDK0", 00:08:31.483 "firmware_revision": "25.01", 00:08:31.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:31.483 "oacs": { 00:08:31.483 "security": 0, 00:08:31.483 "format": 0, 00:08:31.483 "firmware": 0, 
00:08:31.483 "ns_manage": 0 00:08:31.483 }, 00:08:31.483 "multi_ctrlr": true, 00:08:31.483 "ana_reporting": false 00:08:31.483 }, 00:08:31.483 "vs": { 00:08:31.483 "nvme_version": "1.3" 00:08:31.483 }, 00:08:31.483 "ns_data": { 00:08:31.483 "id": 1, 00:08:31.483 "can_share": true 00:08:31.483 } 00:08:31.483 } 00:08:31.483 ], 00:08:31.483 "mp_policy": "active_passive" 00:08:31.483 } 00:08:31.483 } 00:08:31.483 ] 00:08:31.483 08:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63976 00:08:31.483 08:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:31.483 08:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:31.483 Running I/O for 10 seconds... 00:08:32.443 Latency(us) 00:08:32.443 [2024-10-15T08:19:34.174Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.443 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:32.443 [2024-10-15T08:19:34.174Z] =================================================================================================================== 00:08:32.443 [2024-10-15T08:19:34.174Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:32.443 00:08:33.377 08:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:33.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.635 Nvme0n1 : 2.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:33.635 [2024-10-15T08:19:35.366Z] =================================================================================================================== 00:08:33.635 [2024-10-15T08:19:35.366Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:33.635 00:08:33.893 true 00:08:33.893 08:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:33.893 08:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:34.152 08:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:34.152 08:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:34.152 08:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63976 00:08:34.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.721 Nvme0n1 : 3.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:08:34.721 [2024-10-15T08:19:36.452Z] =================================================================================================================== 00:08:34.721 [2024-10-15T08:19:36.452Z] Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:08:34.721 00:08:35.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.659 Nvme0n1 : 4.00 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:08:35.659 [2024-10-15T08:19:37.390Z] 
=================================================================================================================== 00:08:35.659 [2024-10-15T08:19:37.390Z] Total : 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:08:35.659 00:08:36.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.596 Nvme0n1 : 5.00 6693.20 26.15 0.00 0.00 0.00 0.00 0.00 00:08:36.596 [2024-10-15T08:19:38.327Z] =================================================================================================================== 00:08:36.596 [2024-10-15T08:19:38.327Z] Total : 6693.20 26.15 0.00 0.00 0.00 0.00 0.00 00:08:36.596 00:08:37.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.586 Nvme0n1 : 6.00 6657.17 26.00 0.00 0.00 0.00 0.00 0.00 00:08:37.586 [2024-10-15T08:19:39.317Z] =================================================================================================================== 00:08:37.586 [2024-10-15T08:19:39.317Z] Total : 6657.17 26.00 0.00 0.00 0.00 0.00 0.00 00:08:37.586 00:08:38.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.519 Nvme0n1 : 7.00 6667.71 26.05 0.00 0.00 0.00 0.00 0.00 00:08:38.519 [2024-10-15T08:19:40.250Z] =================================================================================================================== 00:08:38.519 [2024-10-15T08:19:40.250Z] Total : 6667.71 26.05 0.00 0.00 0.00 0.00 0.00 00:08:38.519 00:08:39.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.464 Nvme0n1 : 8.00 6675.62 26.08 0.00 0.00 0.00 0.00 0.00 00:08:39.464 [2024-10-15T08:19:41.195Z] =================================================================================================================== 00:08:39.464 [2024-10-15T08:19:41.195Z] Total : 6675.62 26.08 0.00 0.00 0.00 0.00 0.00 00:08:39.464 00:08:40.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.843 Nvme0n1 : 9.00 6695.89 26.16 0.00 0.00 0.00 0.00 0.00 00:08:40.843 [2024-10-15T08:19:42.574Z] =================================================================================================================== 00:08:40.843 [2024-10-15T08:19:42.574Z] Total : 6695.89 26.16 0.00 0.00 0.00 0.00 0.00 00:08:40.843 00:08:41.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.782 Nvme0n1 : 10.00 6610.50 25.82 0.00 0.00 0.00 0.00 0.00 00:08:41.782 [2024-10-15T08:19:43.513Z] =================================================================================================================== 00:08:41.782 [2024-10-15T08:19:43.513Z] Total : 6610.50 25.82 0.00 0.00 0.00 0.00 0.00 00:08:41.782 00:08:41.782 00:08:41.782 Latency(us) 00:08:41.782 [2024-10-15T08:19:43.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.782 Nvme0n1 : 10.02 6609.31 25.82 0.00 0.00 19361.52 11021.96 185883.93 00:08:41.782 [2024-10-15T08:19:43.513Z] =================================================================================================================== 00:08:41.782 [2024-10-15T08:19:43.513Z] Total : 6609.31 25.82 0.00 0.00 19361.52 11021.96 185883.93 00:08:41.782 { 00:08:41.782 "results": [ 00:08:41.782 { 00:08:41.782 "job": "Nvme0n1", 00:08:41.782 "core_mask": "0x2", 00:08:41.782 "workload": "randwrite", 00:08:41.782 "status": "finished", 00:08:41.782 "queue_depth": 128, 00:08:41.782 "io_size": 4096, 00:08:41.782 "runtime": 
10.021174, 00:08:41.782 "iops": 6609.30545662614, 00:08:41.782 "mibps": 25.81759943994586, 00:08:41.782 "io_failed": 0, 00:08:41.782 "io_timeout": 0, 00:08:41.782 "avg_latency_us": 19361.524273508265, 00:08:41.782 "min_latency_us": 11021.963636363636, 00:08:41.782 "max_latency_us": 185883.92727272728 00:08:41.782 } 00:08:41.782 ], 00:08:41.782 "core_count": 1 00:08:41.782 } 00:08:41.782 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63947 00:08:41.782 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 63947 ']' 00:08:41.782 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 63947 00:08:41.782 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:41.782 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.782 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63947 00:08:41.782 killing process with pid 63947 00:08:41.782 Received shutdown signal, test time was about 10.000000 seconds 00:08:41.782 00:08:41.782 Latency(us) 00:08:41.782 [2024-10-15T08:19:43.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.782 [2024-10-15T08:19:43.513Z] =================================================================================================================== 00:08:41.782 [2024-10-15T08:19:43.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:41.783 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:41.783 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:41.783 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63947' 00:08:41.783 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 63947 00:08:41.783 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 63947 00:08:42.041 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:42.305 08:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:42.874 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:42.874 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63579 
00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63579 00:08:43.132 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63579 Killed "${NVMF_APP[@]}" "$@" 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=64114 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 64114 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 64114 ']' 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.132 08:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.132 [2024-10-15 08:19:44.763745] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:08:43.132 [2024-10-15 08:19:44.764137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.392 [2024-10-15 08:19:44.896795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.392 [2024-10-15 08:19:44.980166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.392 [2024-10-15 08:19:44.980237] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.392 [2024-10-15 08:19:44.980249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.392 [2024-10-15 08:19:44.980258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.392 [2024-10-15 08:19:44.980265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:43.392 [2024-10-15 08:19:44.980743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.392 [2024-10-15 08:19:45.053947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.650 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.650 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:43.650 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:43.650 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.650 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.650 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.650 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.909 [2024-10-15 08:19:45.482017] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:43.909 [2024-10-15 08:19:45.482301] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:43.909 [2024-10-15 08:19:45.482545] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:43.909 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:43.909 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b411d060-ad34-41b6-bd7f-e9379f6b1ccf 00:08:43.909 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b411d060-ad34-41b6-bd7f-e9379f6b1ccf 00:08:43.909 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.909 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:43.909 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.909 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.909 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:44.168 08:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b411d060-ad34-41b6-bd7f-e9379f6b1ccf -t 2000 00:08:44.427 [ 00:08:44.427 { 00:08:44.427 "name": "b411d060-ad34-41b6-bd7f-e9379f6b1ccf", 00:08:44.427 "aliases": [ 00:08:44.427 "lvs/lvol" 00:08:44.427 ], 00:08:44.427 "product_name": "Logical Volume", 00:08:44.427 "block_size": 4096, 00:08:44.427 "num_blocks": 38912, 00:08:44.427 "uuid": "b411d060-ad34-41b6-bd7f-e9379f6b1ccf", 00:08:44.427 "assigned_rate_limits": { 00:08:44.427 "rw_ios_per_sec": 0, 00:08:44.427 "rw_mbytes_per_sec": 0, 00:08:44.427 "r_mbytes_per_sec": 0, 00:08:44.427 "w_mbytes_per_sec": 0 00:08:44.427 }, 00:08:44.427 
"claimed": false, 00:08:44.427 "zoned": false, 00:08:44.427 "supported_io_types": { 00:08:44.427 "read": true, 00:08:44.427 "write": true, 00:08:44.427 "unmap": true, 00:08:44.427 "flush": false, 00:08:44.427 "reset": true, 00:08:44.427 "nvme_admin": false, 00:08:44.427 "nvme_io": false, 00:08:44.427 "nvme_io_md": false, 00:08:44.427 "write_zeroes": true, 00:08:44.427 "zcopy": false, 00:08:44.427 "get_zone_info": false, 00:08:44.427 "zone_management": false, 00:08:44.427 "zone_append": false, 00:08:44.427 "compare": false, 00:08:44.427 "compare_and_write": false, 00:08:44.427 "abort": false, 00:08:44.427 "seek_hole": true, 00:08:44.427 "seek_data": true, 00:08:44.427 "copy": false, 00:08:44.427 "nvme_iov_md": false 00:08:44.427 }, 00:08:44.427 "driver_specific": { 00:08:44.427 "lvol": { 00:08:44.427 "lvol_store_uuid": "4606c5eb-31f9-4622-b01c-ffe8c01fdabb", 00:08:44.428 "base_bdev": "aio_bdev", 00:08:44.428 "thin_provision": false, 00:08:44.428 "num_allocated_clusters": 38, 00:08:44.428 "snapshot": false, 00:08:44.428 "clone": false, 00:08:44.428 "esnap_clone": false 00:08:44.428 } 00:08:44.428 } 00:08:44.428 } 00:08:44.428 ] 00:08:44.428 08:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:44.428 08:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:44.428 08:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:44.995 08:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:44.995 08:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:44.995 08:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:44.995 08:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:44.995 08:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:45.563 [2024-10-15 08:19:46.987251] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.563 08:19:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:45.563 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:45.563 request: 00:08:45.563 { 00:08:45.563 "uuid": "4606c5eb-31f9-4622-b01c-ffe8c01fdabb", 00:08:45.563 "method": "bdev_lvol_get_lvstores", 00:08:45.563 "req_id": 1 00:08:45.563 } 00:08:45.563 Got JSON-RPC error response 00:08:45.563 response: 00:08:45.563 { 00:08:45.563 "code": -19, 00:08:45.563 "message": "No such device" 00:08:45.563 } 00:08:45.822 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:45.822 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:45.822 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:45.822 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:45.822 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.081 aio_bdev 00:08:46.081 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b411d060-ad34-41b6-bd7f-e9379f6b1ccf 00:08:46.081 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b411d060-ad34-41b6-bd7f-e9379f6b1ccf 00:08:46.081 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.081 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:46.081 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.081 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.081 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:46.339 08:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b411d060-ad34-41b6-bd7f-e9379f6b1ccf -t 2000 00:08:46.597 [ 00:08:46.597 { 
00:08:46.597 "name": "b411d060-ad34-41b6-bd7f-e9379f6b1ccf", 00:08:46.597 "aliases": [ 00:08:46.597 "lvs/lvol" 00:08:46.597 ], 00:08:46.597 "product_name": "Logical Volume", 00:08:46.597 "block_size": 4096, 00:08:46.597 "num_blocks": 38912, 00:08:46.597 "uuid": "b411d060-ad34-41b6-bd7f-e9379f6b1ccf", 00:08:46.597 "assigned_rate_limits": { 00:08:46.597 "rw_ios_per_sec": 0, 00:08:46.597 "rw_mbytes_per_sec": 0, 00:08:46.597 "r_mbytes_per_sec": 0, 00:08:46.597 "w_mbytes_per_sec": 0 00:08:46.597 }, 00:08:46.597 "claimed": false, 00:08:46.597 "zoned": false, 00:08:46.597 "supported_io_types": { 00:08:46.597 "read": true, 00:08:46.597 "write": true, 00:08:46.597 "unmap": true, 00:08:46.597 "flush": false, 00:08:46.597 "reset": true, 00:08:46.597 "nvme_admin": false, 00:08:46.597 "nvme_io": false, 00:08:46.597 "nvme_io_md": false, 00:08:46.597 "write_zeroes": true, 00:08:46.597 "zcopy": false, 00:08:46.597 "get_zone_info": false, 00:08:46.597 "zone_management": false, 00:08:46.597 "zone_append": false, 00:08:46.597 "compare": false, 00:08:46.597 "compare_and_write": false, 00:08:46.597 "abort": false, 00:08:46.597 "seek_hole": true, 00:08:46.597 "seek_data": true, 00:08:46.597 "copy": false, 00:08:46.597 "nvme_iov_md": false 00:08:46.597 }, 00:08:46.597 "driver_specific": { 00:08:46.597 "lvol": { 00:08:46.597 "lvol_store_uuid": "4606c5eb-31f9-4622-b01c-ffe8c01fdabb", 00:08:46.597 "base_bdev": "aio_bdev", 00:08:46.597 "thin_provision": false, 00:08:46.597 "num_allocated_clusters": 38, 00:08:46.597 "snapshot": false, 00:08:46.597 "clone": false, 00:08:46.597 "esnap_clone": false 00:08:46.597 } 00:08:46.597 } 00:08:46.597 } 00:08:46.597 ] 00:08:46.597 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:46.597 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:46.597 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:46.856 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:46.856 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:46.856 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.115 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.115 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b411d060-ad34-41b6-bd7f-e9379f6b1ccf 00:08:47.372 08:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4606c5eb-31f9-4622-b01c-ffe8c01fdabb 00:08:47.631 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:47.889 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:48.489 ************************************ 00:08:48.489 END TEST lvs_grow_dirty 00:08:48.489 ************************************ 00:08:48.489 00:08:48.489 real 0m21.571s 00:08:48.489 user 0m46.646s 00:08:48.489 sys 0m8.136s 00:08:48.489 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.489 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.489 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:48.489 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:48.489 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:48.489 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:48.489 08:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:48.489 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:48.489 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:48.489 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:48.489 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:48.489 nvmf_trace.0 00:08:48.489 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:48.489 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:48.489 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:48.489 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.747 rmmod nvme_tcp 00:08:48.747 rmmod nvme_fabrics 00:08:48.747 rmmod nvme_keyring 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 64114 ']' 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 64114 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 64114 ']' 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 64114 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:48.747 08:19:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64114 00:08:48.747 killing process with pid 64114 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64114' 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 64114 00:08:48.747 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 64114 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:49.006 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:49.264 00:08:49.264 real 0m44.079s 00:08:49.264 user 1m11.388s 00:08:49.264 sys 0m12.108s 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.264 ************************************ 00:08:49.264 END TEST nvmf_lvs_grow 00:08:49.264 ************************************ 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.264 08:19:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.523 ************************************ 00:08:49.523 START TEST nvmf_bdev_io_wait 00:08:49.523 ************************************ 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:49.523 * Looking for test storage... 
00:08:49.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.523 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.523 --rc genhtml_branch_coverage=1 00:08:49.523 --rc genhtml_function_coverage=1 00:08:49.523 --rc genhtml_legend=1 00:08:49.523 --rc geninfo_all_blocks=1 00:08:49.523 --rc geninfo_unexecuted_blocks=1 00:08:49.524 00:08:49.524 ' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.524 --rc genhtml_branch_coverage=1 00:08:49.524 --rc genhtml_function_coverage=1 00:08:49.524 --rc genhtml_legend=1 00:08:49.524 --rc geninfo_all_blocks=1 00:08:49.524 --rc geninfo_unexecuted_blocks=1 00:08:49.524 00:08:49.524 ' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.524 --rc genhtml_branch_coverage=1 00:08:49.524 --rc genhtml_function_coverage=1 00:08:49.524 --rc genhtml_legend=1 00:08:49.524 --rc geninfo_all_blocks=1 00:08:49.524 --rc geninfo_unexecuted_blocks=1 00:08:49.524 00:08:49.524 ' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.524 --rc genhtml_branch_coverage=1 00:08:49.524 --rc genhtml_function_coverage=1 00:08:49.524 --rc genhtml_legend=1 00:08:49.524 --rc geninfo_all_blocks=1 00:08:49.524 --rc geninfo_unexecuted_blocks=1 00:08:49.524 00:08:49.524 ' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.524 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
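Earlier in this trace, autotest_common.sh probes the installed lcov with the lt/cmp_versions helpers from scripts/common.sh: each version string is split on '.', '-' and ':' (IFS=.-:), the components are read into arrays, and they are compared numerically left to right, so lt 1.15 2 succeeds because 1 < 2. A minimal stand-alone sketch of that comparison (an assumed simplification with a made-up helper name, not the actual scripts/common.sh code) could look like:

    ver_lt() {
        local IFS=.-:                        # split versions on '.', '-' and ':', as in the trace
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}  # missing components compare as 0
            (( x < y )) && return 0          # first differing component decides
            (( x > y )) && return 1
        done
        return 1                             # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints, mirroring the lt 1.15 2 check above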
00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.524 
08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:49.524 Cannot find device "nvmf_init_br" 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:49.524 Cannot find device "nvmf_init_br2" 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:49.524 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:49.783 Cannot find device "nvmf_tgt_br" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.783 Cannot find device "nvmf_tgt_br2" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:49.783 Cannot find device "nvmf_init_br" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:49.783 Cannot find device "nvmf_init_br2" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:49.783 Cannot find device "nvmf_tgt_br" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:49.783 Cannot find device "nvmf_tgt_br2" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:49.783 Cannot find device "nvmf_br" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:49.783 Cannot find device "nvmf_init_if" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:49.783 Cannot find device "nvmf_init_if2" 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:49.783 
08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:49.783 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:49.784 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.784 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.784 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.784 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:49.784 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:49.784 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.784 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:50.042 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.042 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.042 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.042 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:50.042 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:50.042 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:50.042 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:50.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:08:50.043 00:08:50.043 --- 10.0.0.3 ping statistics --- 00:08:50.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.043 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:50.043 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:50.043 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:08:50.043 00:08:50.043 --- 10.0.0.4 ping statistics --- 00:08:50.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.043 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:50.043 00:08:50.043 --- 10.0.0.1 ping statistics --- 00:08:50.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.043 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:50.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:50.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:50.043 00:08:50.043 --- 10.0.0.2 ping statistics --- 00:08:50.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.043 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # return 0 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=64488 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 64488 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 64488 ']' 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.043 08:19:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.043 [2024-10-15 08:19:51.678198] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:08:50.043 [2024-10-15 08:19:51.678894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.301 [2024-10-15 08:19:51.816816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.301 [2024-10-15 08:19:51.906560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.301 [2024-10-15 08:19:51.906660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.301 [2024-10-15 08:19:51.906675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.301 [2024-10-15 08:19:51.906686] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.301 [2024-10-15 08:19:51.906695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.301 [2024-10-15 08:19:51.908340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.301 [2024-10-15 08:19:51.908465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.301 [2024-10-15 08:19:51.908565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.301 [2024-10-15 08:19:51.908567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 [2024-10-15 08:19:52.869282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 [2024-10-15 08:19:52.886618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 Malloc0 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.237 [2024-10-15 08:19:52.952155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64523 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64525 00:08:51.237 08:19:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:51.237 { 00:08:51.237 "params": { 00:08:51.237 "name": "Nvme$subsystem", 00:08:51.237 "trtype": "$TEST_TRANSPORT", 00:08:51.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.237 "adrfam": "ipv4", 00:08:51.237 "trsvcid": "$NVMF_PORT", 00:08:51.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.237 "hdgst": ${hdgst:-false}, 00:08:51.237 "ddgst": ${ddgst:-false} 00:08:51.237 }, 00:08:51.237 "method": "bdev_nvme_attach_controller" 00:08:51.237 } 00:08:51.237 EOF 00:08:51.237 )") 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64527 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:51.237 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:51.497 { 00:08:51.497 "params": { 00:08:51.497 "name": "Nvme$subsystem", 00:08:51.497 "trtype": "$TEST_TRANSPORT", 00:08:51.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.497 "adrfam": "ipv4", 00:08:51.497 "trsvcid": "$NVMF_PORT", 00:08:51.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.497 "hdgst": ${hdgst:-false}, 00:08:51.497 "ddgst": ${ddgst:-false} 00:08:51.497 }, 00:08:51.497 "method": "bdev_nvme_attach_controller" 00:08:51.497 } 00:08:51.497 EOF 00:08:51.497 )") 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:51.497 { 00:08:51.497 "params": { 00:08:51.497 "name": "Nvme$subsystem", 00:08:51.497 "trtype": "$TEST_TRANSPORT", 00:08:51.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.497 "adrfam": "ipv4", 00:08:51.497 "trsvcid": "$NVMF_PORT", 00:08:51.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.497 "hdgst": ${hdgst:-false}, 00:08:51.497 "ddgst": ${ddgst:-false} 00:08:51.497 }, 00:08:51.497 "method": "bdev_nvme_attach_controller" 00:08:51.497 } 00:08:51.497 EOF 
00:08:51.497 )") 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:51.497 { 00:08:51.497 "params": { 00:08:51.497 "name": "Nvme$subsystem", 00:08:51.497 "trtype": "$TEST_TRANSPORT", 00:08:51.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:51.497 "adrfam": "ipv4", 00:08:51.497 "trsvcid": "$NVMF_PORT", 00:08:51.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:51.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:51.497 "hdgst": ${hdgst:-false}, 00:08:51.497 "ddgst": ${ddgst:-false} 00:08:51.497 }, 00:08:51.497 "method": "bdev_nvme_attach_controller" 00:08:51.497 } 00:08:51.497 EOF 00:08:51.497 )") 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64530 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:51.497 "params": { 00:08:51.497 "name": "Nvme1", 00:08:51.497 "trtype": "tcp", 00:08:51.497 "traddr": "10.0.0.3", 00:08:51.497 "adrfam": "ipv4", 00:08:51.497 "trsvcid": "4420", 00:08:51.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.497 "hdgst": false, 00:08:51.497 "ddgst": false 00:08:51.497 }, 00:08:51.497 "method": "bdev_nvme_attach_controller" 00:08:51.497 }' 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:51.497 "params": { 00:08:51.497 "name": "Nvme1", 00:08:51.497 "trtype": "tcp", 00:08:51.497 "traddr": "10.0.0.3", 00:08:51.497 "adrfam": "ipv4", 00:08:51.497 "trsvcid": "4420", 00:08:51.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.497 "hdgst": false, 00:08:51.497 "ddgst": false 00:08:51.497 }, 00:08:51.497 "method": "bdev_nvme_attach_controller" 00:08:51.497 }' 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:51.497 "params": { 00:08:51.497 "name": "Nvme1", 00:08:51.497 "trtype": "tcp", 00:08:51.497 "traddr": "10.0.0.3", 00:08:51.497 "adrfam": "ipv4", 00:08:51.497 "trsvcid": "4420", 00:08:51.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.497 "hdgst": false, 00:08:51.497 "ddgst": false 00:08:51.497 }, 00:08:51.497 "method": "bdev_nvme_attach_controller" 00:08:51.497 }' 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:51.497 08:19:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:51.497 "params": { 00:08:51.497 "name": "Nvme1", 00:08:51.497 "trtype": "tcp", 00:08:51.497 "traddr": "10.0.0.3", 00:08:51.497 "adrfam": "ipv4", 00:08:51.497 "trsvcid": "4420", 00:08:51.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:51.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:51.498 "hdgst": false, 00:08:51.498 "ddgst": false 00:08:51.498 }, 00:08:51.498 "method": "bdev_nvme_attach_controller" 00:08:51.498 }' 00:08:51.498 [2024-10-15 08:19:53.013003] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:08:51.498 [2024-10-15 08:19:53.013098] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:51.498 [2024-10-15 08:19:53.026600] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:08:51.498 [2024-10-15 08:19:53.026912] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:51.498 08:19:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64523 00:08:51.498 [2024-10-15 08:19:53.069222] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:08:51.498 [2024-10-15 08:19:53.069633] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:51.498 [2024-10-15 08:19:53.075420] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:08:51.498 [2024-10-15 08:19:53.075523] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:51.757 [2024-10-15 08:19:53.239158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.757 [2024-10-15 08:19:53.308464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:51.757 [2024-10-15 08:19:53.322490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.757 [2024-10-15 08:19:53.331949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.757 [2024-10-15 08:19:53.394700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:51.757 [2024-10-15 08:19:53.407392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.757 [2024-10-15 08:19:53.429080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.016 [2024-10-15 08:19:53.491032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:52.016 [2024-10-15 08:19:53.503662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.016 [2024-10-15 08:19:53.525903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.016 Running I/O for 1 seconds... 00:08:52.016 Running I/O for 1 seconds... 00:08:52.016 [2024-10-15 08:19:53.586556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.016 [2024-10-15 08:19:53.599127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.016 Running I/O for 1 seconds... 00:08:52.016 Running I/O for 1 seconds... 
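A quick sanity check on the per-job tables that follow: bdevperf reports MiB/s derived from IOPS at the configured 4096-byte I/O size, so MiB/s ≈ IOPS × 4096 / 2^20. For the read job below, 9063.30 IOPS × 4096 B ≈ 35.40 MiB/s, which matches the reported column; the same relation holds for the write, unmap and flush rows.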
00:08:52.951 9024.00 IOPS, 35.25 MiB/s 00:08:52.951 Latency(us) 00:08:52.951 [2024-10-15T08:19:54.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.951 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:52.951 Nvme1n1 : 1.01 9063.30 35.40 0.00 0.00 14047.94 7447.27 18469.24 00:08:52.951 [2024-10-15T08:19:54.682Z] =================================================================================================================== 00:08:52.951 [2024-10-15T08:19:54.682Z] Total : 9063.30 35.40 0.00 0.00 14047.94 7447.27 18469.24 00:08:52.951 8150.00 IOPS, 31.84 MiB/s 00:08:52.951 Latency(us) 00:08:52.951 [2024-10-15T08:19:54.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.951 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:52.951 Nvme1n1 : 1.01 8222.64 32.12 0.00 0.00 15494.94 7745.16 23712.12 00:08:52.951 [2024-10-15T08:19:54.682Z] =================================================================================================================== 00:08:52.951 [2024-10-15T08:19:54.682Z] Total : 8222.64 32.12 0.00 0.00 15494.94 7745.16 23712.12 00:08:52.951 175472.00 IOPS, 685.44 MiB/s 00:08:52.951 Latency(us) 00:08:52.951 [2024-10-15T08:19:54.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.951 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:52.951 Nvme1n1 : 1.00 175119.43 684.06 0.00 0.00 727.15 368.64 1980.97 00:08:52.951 [2024-10-15T08:19:54.682Z] =================================================================================================================== 00:08:52.951 [2024-10-15T08:19:54.682Z] Total : 175119.43 684.06 0.00 0.00 727.15 368.64 1980.97 00:08:53.209 8191.00 IOPS, 32.00 MiB/s 00:08:53.209 Latency(us) 00:08:53.209 [2024-10-15T08:19:54.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.209 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:53.209 Nvme1n1 : 1.01 8260.70 32.27 0.00 0.00 15420.86 4200.26 25261.15 00:08:53.209 [2024-10-15T08:19:54.940Z] =================================================================================================================== 00:08:53.209 [2024-10-15T08:19:54.940Z] Total : 8260.70 32.27 0.00 0.00 15420.86 4200.26 25261.15 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64525 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64527 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64530 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:08:53.467 08:19:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:53.467 rmmod nvme_tcp 00:08:53.467 rmmod nvme_fabrics 00:08:53.467 rmmod nvme_keyring 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 64488 ']' 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 64488 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 64488 ']' 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 64488 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64488 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.467 killing process with pid 64488 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64488' 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 64488 00:08:53.467 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 64488 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:53.725 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:53.984 00:08:53.984 real 0m4.597s 00:08:53.984 user 0m18.336s 00:08:53.984 sys 0m2.588s 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.984 ************************************ 00:08:53.984 END TEST nvmf_bdev_io_wait 00:08:53.984 ************************************ 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.984 ************************************ 00:08:53.984 START TEST nvmf_queue_depth 00:08:53.984 ************************************ 00:08:53.984 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.292 * Looking for test storage... 
00:08:54.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.292 --rc genhtml_branch_coverage=1 00:08:54.292 --rc genhtml_function_coverage=1 00:08:54.292 --rc genhtml_legend=1 00:08:54.292 --rc geninfo_all_blocks=1 00:08:54.292 --rc geninfo_unexecuted_blocks=1 00:08:54.292 00:08:54.292 ' 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.292 --rc genhtml_branch_coverage=1 00:08:54.292 --rc genhtml_function_coverage=1 00:08:54.292 --rc genhtml_legend=1 00:08:54.292 --rc geninfo_all_blocks=1 00:08:54.292 --rc geninfo_unexecuted_blocks=1 00:08:54.292 00:08:54.292 ' 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.292 --rc genhtml_branch_coverage=1 00:08:54.292 --rc genhtml_function_coverage=1 00:08:54.292 --rc genhtml_legend=1 00:08:54.292 --rc geninfo_all_blocks=1 00:08:54.292 --rc geninfo_unexecuted_blocks=1 00:08:54.292 00:08:54.292 ' 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:54.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.292 --rc genhtml_branch_coverage=1 00:08:54.292 --rc genhtml_function_coverage=1 00:08:54.292 --rc genhtml_legend=1 00:08:54.292 --rc geninfo_all_blocks=1 00:08:54.292 --rc geninfo_unexecuted_blocks=1 00:08:54.292 00:08:54.292 ' 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.292 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.293 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:54.293 
08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # nvmf_veth_init 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.293 08:19:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:54.293 Cannot find device "nvmf_init_br" 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:54.293 Cannot find device "nvmf_init_br2" 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:54.293 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:54.293 Cannot find device "nvmf_tgt_br" 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.294 Cannot find device "nvmf_tgt_br2" 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:54.294 Cannot find device "nvmf_init_br" 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:54.294 Cannot find device "nvmf_init_br2" 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:54.294 Cannot find device "nvmf_tgt_br" 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:54.294 Cannot find device "nvmf_tgt_br2" 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:54.294 Cannot find device "nvmf_br" 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:54.294 08:19:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:54.294 Cannot find device "nvmf_init_if" 00:08:54.294 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:54.294 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:54.294 Cannot find device "nvmf_init_if2" 00:08:54.294 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:54.294 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.294 08:19:56 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:54.294 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.553 
08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.553 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:54.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:08:54.813 00:08:54.813 --- 10.0.0.3 ping statistics --- 00:08:54.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.813 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:54.813 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:54.813 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:08:54.813 00:08:54.813 --- 10.0.0.4 ping statistics --- 00:08:54.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.813 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:54.813 00:08:54.813 --- 10.0.0.1 ping statistics --- 00:08:54.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.813 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:54.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:54.813 00:08:54.813 --- 10.0.0.2 ping statistics --- 00:08:54.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.813 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # return 0 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=64817 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 64817 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64817 ']' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.813 08:19:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.813 [2024-10-15 08:19:56.424160] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:08:54.813 [2024-10-15 08:19:56.424277] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.073 [2024-10-15 08:19:56.572263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.073 [2024-10-15 08:19:56.658859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.073 [2024-10-15 08:19:56.658953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.073 [2024-10-15 08:19:56.658982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.073 [2024-10-15 08:19:56.658993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.073 [2024-10-15 08:19:56.659002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.073 [2024-10-15 08:19:56.659561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.073 [2024-10-15 08:19:56.734796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.010 [2024-10-15 08:19:57.515220] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.010 Malloc0 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.010 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.011 [2024-10-15 08:19:57.572095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64856 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64856 /var/tmp/bdevperf.sock 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 64856 ']' 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.011 08:19:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.011 [2024-10-15 08:19:57.629757] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:08:56.011 [2024-10-15 08:19:57.629864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64856 ] 00:08:56.269 [2024-10-15 08:19:57.762232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.269 [2024-10-15 08:19:57.841039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.270 [2024-10-15 08:19:57.914751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.205 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.205 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:57.205 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:57.205 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.205 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:57.205 NVMe0n1 00:08:57.205 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.205 08:19:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.205 Running I/O for 10 seconds... 00:08:59.515 6172.00 IOPS, 24.11 MiB/s [2024-10-15T08:20:02.239Z] 6839.50 IOPS, 26.72 MiB/s [2024-10-15T08:20:03.174Z] 7104.67 IOPS, 27.75 MiB/s [2024-10-15T08:20:04.110Z] 7249.25 IOPS, 28.32 MiB/s [2024-10-15T08:20:05.044Z] 7418.40 IOPS, 28.98 MiB/s [2024-10-15T08:20:05.980Z] 7540.83 IOPS, 29.46 MiB/s [2024-10-15T08:20:06.914Z] 7641.43 IOPS, 29.85 MiB/s [2024-10-15T08:20:08.289Z] 7712.50 IOPS, 30.13 MiB/s [2024-10-15T08:20:09.223Z] 7767.44 IOPS, 30.34 MiB/s [2024-10-15T08:20:09.223Z] 7810.90 IOPS, 30.51 MiB/s 00:09:07.492 Latency(us) 00:09:07.492 [2024-10-15T08:20:09.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.492 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:07.492 Verification LBA range: start 0x0 length 0x4000 00:09:07.492 NVMe0n1 : 10.08 7852.21 30.67 0.00 0.00 129781.81 21090.68 100091.35 00:09:07.492 [2024-10-15T08:20:09.223Z] =================================================================================================================== 00:09:07.492 [2024-10-15T08:20:09.223Z] Total : 7852.21 30.67 0.00 0.00 129781.81 21090.68 100091.35 00:09:07.492 { 00:09:07.492 "results": [ 00:09:07.492 { 00:09:07.492 "job": "NVMe0n1", 00:09:07.492 "core_mask": "0x1", 00:09:07.492 "workload": "verify", 00:09:07.492 "status": "finished", 00:09:07.492 "verify_range": { 00:09:07.492 "start": 0, 00:09:07.492 "length": 16384 00:09:07.492 }, 00:09:07.492 "queue_depth": 1024, 00:09:07.492 "io_size": 4096, 00:09:07.492 "runtime": 10.076393, 00:09:07.492 "iops": 7852.214577180544, 00:09:07.492 "mibps": 30.6727131921115, 00:09:07.492 "io_failed": 0, 00:09:07.492 "io_timeout": 0, 00:09:07.492 "avg_latency_us": 129781.8110622261, 00:09:07.492 "min_latency_us": 21090.676363636365, 00:09:07.492 "max_latency_us": 100091.34545454546 
00:09:07.492 } 00:09:07.492 ], 00:09:07.492 "core_count": 1 00:09:07.492 } 00:09:07.492 08:20:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64856 00:09:07.492 08:20:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64856 ']' 00:09:07.492 08:20:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64856 00:09:07.492 08:20:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:07.492 08:20:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.492 08:20:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64856 00:09:07.492 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:07.492 killing process with pid 64856 00:09:07.492 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:07.492 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64856' 00:09:07.492 Received shutdown signal, test time was about 10.000000 seconds 00:09:07.492 00:09:07.492 Latency(us) 00:09:07.492 [2024-10-15T08:20:09.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.492 [2024-10-15T08:20:09.223Z] =================================================================================================================== 00:09:07.492 [2024-10-15T08:20:09.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:07.492 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64856 00:09:07.492 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64856 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.753 rmmod nvme_tcp 00:09:07.753 rmmod nvme_fabrics 00:09:07.753 rmmod nvme_keyring 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 64817 ']' 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 64817 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 64817 ']' 
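The queue-depth run summarized above behaves as Little's Law predicts: in-flight I/O is roughly IOPS × average latency, and 7852.21 IOPS × 0.12978 s gives about 1019 outstanding commands, essentially the configured queue depth of 1024, so bdevperf kept the 1024-deep queue saturated for the whole 10-second run. This is a rough back-of-the-envelope check against the figures reported above, not an output of the test itself.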
00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 64817 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64817 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:07.753 killing process with pid 64817 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64817' 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 64817 00:09:07.753 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 64817 00:09:08.014 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:08.014 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:08.014 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:08.014 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:08.272 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:08.273 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:08.273 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:08.273 08:20:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.273 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.273 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:08.273 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.273 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.273 08:20:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:08.531 00:09:08.531 real 0m14.361s 00:09:08.531 user 0m24.171s 00:09:08.531 sys 0m2.499s 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.531 ************************************ 00:09:08.531 END TEST nvmf_queue_depth 00:09:08.531 ************************************ 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.531 ************************************ 00:09:08.531 START TEST nvmf_target_multipath 00:09:08.531 ************************************ 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.531 * Looking for test storage... 
00:09:08.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:08.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.531 --rc genhtml_branch_coverage=1 00:09:08.531 --rc genhtml_function_coverage=1 00:09:08.531 --rc genhtml_legend=1 00:09:08.531 --rc geninfo_all_blocks=1 00:09:08.531 --rc geninfo_unexecuted_blocks=1 00:09:08.531 00:09:08.531 ' 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:08.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.531 --rc genhtml_branch_coverage=1 00:09:08.531 --rc genhtml_function_coverage=1 00:09:08.531 --rc genhtml_legend=1 00:09:08.531 --rc geninfo_all_blocks=1 00:09:08.531 --rc geninfo_unexecuted_blocks=1 00:09:08.531 00:09:08.531 ' 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:08.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.531 --rc genhtml_branch_coverage=1 00:09:08.531 --rc genhtml_function_coverage=1 00:09:08.531 --rc genhtml_legend=1 00:09:08.531 --rc geninfo_all_blocks=1 00:09:08.531 --rc geninfo_unexecuted_blocks=1 00:09:08.531 00:09:08.531 ' 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:08.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.531 --rc genhtml_branch_coverage=1 00:09:08.531 --rc genhtml_function_coverage=1 00:09:08.531 --rc genhtml_legend=1 00:09:08.531 --rc geninfo_all_blocks=1 00:09:08.531 --rc geninfo_unexecuted_blocks=1 00:09:08.531 00:09:08.531 ' 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.531 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.791 
08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.791 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:08.791 08:20:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:08.791 Cannot find device "nvmf_init_br" 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:08.791 Cannot find device "nvmf_init_br2" 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:08.791 Cannot find device "nvmf_tgt_br" 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.791 Cannot find device "nvmf_tgt_br2" 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:08.791 Cannot find device "nvmf_init_br" 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:08.791 Cannot find device "nvmf_init_br2" 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:08.791 Cannot find device "nvmf_tgt_br" 00:09:08.791 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:08.792 Cannot find device "nvmf_tgt_br2" 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:08.792 Cannot find device "nvmf_br" 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:08.792 Cannot find device "nvmf_init_if" 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:08.792 Cannot find device "nvmf_init_if2" 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:08.792 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
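The ip(8) commands logged above build the virtual test network used for the rest of this run. A condensed sketch of that topology, assuming the same interface and namespace names shown in the trace (the bridge wiring, firewall rules, and ping checks follow in the next entries):

# veth/netns layout as driven by nvmf_veth_init (names taken from the log above)
ip netns add nvmf_tgt_ns_spdk                                   # target side lives in its own namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator ends stay on the host
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target ends are moved into the namespace
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target listen addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# the *_br peer ends are then enslaved to the nvmf_br bridge and reachability is
# verified with the pings recorded in the following entries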
00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:09:09.051 00:09:09.051 --- 10.0.0.3 ping statistics --- 00:09:09.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.051 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:09.051 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:09.051 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:09:09.051 00:09:09.051 --- 10.0.0.4 ping statistics --- 00:09:09.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.051 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:09.051 00:09:09.051 --- 10.0.0.1 ping statistics --- 00:09:09.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.051 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:09.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:09:09.051 00:09:09.051 --- 10.0.0.2 ping statistics --- 00:09:09.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.051 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # return 0 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.051 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # nvmfpid=65232 00:09:09.052 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.052 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # waitforlisten 65232 00:09:09.052 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 65232 ']' 00:09:09.052 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.052 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
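With all four addresses reachable, the test starts the SPDK target inside the namespace and blocks until its RPC socket appears. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock path shown in the log (the polling loop is a simplified stand-in for the waitforlisten helper):

# start nvmf_tgt inside the target namespace (command as logged above)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# wait for the RPC UNIX socket before issuing any rpc.py calls
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done

The multipath test then drives that socket with rpc.py to create the TCP transport, a Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, and listeners on 10.0.0.3 and 10.0.0.4, as the subsequent entries show.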
00:09:09.052 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.052 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.052 08:20:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.052 [2024-10-15 08:20:10.759065] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:09:09.052 [2024-10-15 08:20:10.759204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.310 [2024-10-15 08:20:10.902834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.310 [2024-10-15 08:20:10.991555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.310 [2024-10-15 08:20:10.991627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.310 [2024-10-15 08:20:10.991641] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.310 [2024-10-15 08:20:10.991652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.310 [2024-10-15 08:20:10.991662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.310 [2024-10-15 08:20:10.993141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.310 [2024-10-15 08:20:10.993246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.310 [2024-10-15 08:20:10.993403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.310 [2024-10-15 08:20:10.993410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.568 [2024-10-15 08:20:11.069764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.568 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.568 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:09.568 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:09.568 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:09.568 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.568 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.568 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.826 [2024-10-15 08:20:11.435075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.826 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:10.083 Malloc0 00:09:10.083 08:20:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:10.343 08:20:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.909 08:20:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:10.909 [2024-10-15 08:20:12.604960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:10.909 08:20:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:11.168 [2024-10-15 08:20:12.869158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:11.168 08:20:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:11.427 08:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:11.686 08:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.686 08:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.686 08:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.686 08:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.686 08:20:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:13.590 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65320 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:13.591 08:20:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:13.591 [global] 00:09:13.591 thread=1 00:09:13.591 invalidate=1 00:09:13.591 rw=randrw 00:09:13.591 time_based=1 00:09:13.591 runtime=6 00:09:13.591 ioengine=libaio 00:09:13.591 direct=1 00:09:13.591 bs=4096 00:09:13.591 iodepth=128 00:09:13.591 norandommap=0 00:09:13.591 numjobs=1 00:09:13.591 00:09:13.591 verify_dump=1 00:09:13.591 verify_backlog=512 00:09:13.591 verify_state_save=0 00:09:13.591 do_verify=1 00:09:13.591 verify=crc32c-intel 00:09:13.591 [job0] 00:09:13.591 filename=/dev/nvme0n1 00:09:13.591 Could not set queue depth (nvme0n1) 00:09:13.916 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.916 fio-3.35 00:09:13.916 Starting 1 thread 00:09:14.487 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:15.054 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.312 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:15.313 08:20:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:15.571 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:15.830 08:20:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65320 00:09:20.018 00:09:20.019 job0: (groupid=0, jobs=1): err= 0: pid=65341: Tue Oct 15 08:20:21 2024 00:09:20.019 read: IOPS=10.3k, BW=40.4MiB/s (42.4MB/s)(242MiB/6002msec) 00:09:20.019 slat (usec): min=2, max=5913, avg=55.93, stdev=230.30 00:09:20.019 clat (usec): min=1152, max=16107, avg=8398.53, stdev=1562.28 00:09:20.019 lat (usec): min=1912, max=16224, avg=8454.46, stdev=1568.25 00:09:20.019 clat percentiles (usec): 00:09:20.019 | 1.00th=[ 4359], 5.00th=[ 6063], 10.00th=[ 7111], 20.00th=[ 7635], 00:09:20.019 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8455], 00:09:20.019 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[12256], 00:09:20.019 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14484], 99.95th=[15008], 00:09:20.019 | 99.99th=[15795] 00:09:20.019 bw ( KiB/s): min= 9968, max=25400, per=52.62%, avg=21769.45, stdev=4467.46, samples=11 00:09:20.019 iops : min= 2492, max= 6350, avg=5442.36, stdev=1116.86, samples=11 00:09:20.019 write: IOPS=5981, BW=23.4MiB/s (24.5MB/s)(129MiB/5515msec); 0 zone resets 00:09:20.019 slat (usec): min=4, max=5562, avg=65.93, stdev=151.86 00:09:20.019 clat (usec): min=1291, max=14984, avg=7238.59, stdev=1342.05 00:09:20.019 lat (usec): min=1348, max=15370, avg=7304.52, stdev=1346.04 00:09:20.019 clat percentiles (usec): 00:09:20.019 | 1.00th=[ 3425], 5.00th=[ 4293], 10.00th=[ 5276], 20.00th=[ 6718], 00:09:20.019 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:09:20.019 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8586], 00:09:20.019 | 99.00th=[11600], 99.50th=[12125], 99.90th=[13829], 99.95th=[14222], 00:09:20.019 | 99.99th=[14877] 00:09:20.019 bw ( KiB/s): min=10416, max=26496, per=90.87%, avg=21743.27, stdev=4320.98, samples=11 00:09:20.019 iops : min= 2604, max= 6624, avg=5435.82, stdev=1080.24, samples=11 00:09:20.019 lat (msec) : 2=0.01%, 4=1.58%, 10=92.00%, 20=6.41% 00:09:20.019 cpu : usr=5.75%, sys=23.38%, ctx=5437, majf=0, minf=54 00:09:20.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:20.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.019 issued rwts: total=62079,32989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.019 00:09:20.019 Run status group 0 (all jobs): 00:09:20.019 READ: bw=40.4MiB/s (42.4MB/s), 40.4MiB/s-40.4MiB/s (42.4MB/s-42.4MB/s), io=242MiB (254MB), run=6002-6002msec 00:09:20.019 WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=129MiB (135MB), run=5515-5515msec 00:09:20.019 00:09:20.019 Disk stats (read/write): 00:09:20.019 nvme0n1: ios=61059/32477, merge=0/0, ticks=488701/218845, in_queue=707546, util=98.60% 00:09:20.019 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:20.278 08:20:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:20.536 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:20.537 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:20.537 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:20.537 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:20.537 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:20.537 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:20.537 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65424 00:09:20.537 08:20:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:20.537 [global] 00:09:20.537 thread=1 00:09:20.537 invalidate=1 00:09:20.537 rw=randrw 00:09:20.537 time_based=1 00:09:20.537 runtime=6 00:09:20.537 ioengine=libaio 00:09:20.537 direct=1 00:09:20.537 bs=4096 00:09:20.537 iodepth=128 00:09:20.537 norandommap=0 00:09:20.537 numjobs=1 00:09:20.537 00:09:20.537 verify_dump=1 00:09:20.537 verify_backlog=512 00:09:20.537 verify_state_save=0 00:09:20.537 do_verify=1 00:09:20.537 verify=crc32c-intel 00:09:20.537 [job0] 00:09:20.537 filename=/dev/nvme0n1 00:09:20.537 Could not set queue depth (nvme0n1) 00:09:20.796 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.796 fio-3.35 00:09:20.796 Starting 1 thread 00:09:21.739 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:21.998 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:22.256 
08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:22.256 08:20:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:22.514 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:22.773 08:20:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65424 00:09:26.969 00:09:26.969 job0: (groupid=0, jobs=1): err= 0: pid=65449: Tue Oct 15 08:20:28 2024 00:09:26.969 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(274MiB/6006msec) 00:09:26.969 slat (usec): min=5, max=6296, avg=42.58, stdev=178.34 00:09:26.969 clat (usec): min=298, max=16871, avg=7567.23, stdev=1929.47 00:09:26.969 lat (usec): min=312, max=16951, avg=7609.81, stdev=1943.29 00:09:26.969 clat percentiles (usec): 00:09:26.969 | 1.00th=[ 2999], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5932], 00:09:26.969 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8160], 00:09:26.969 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[11338], 00:09:26.969 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13829], 99.95th=[13960], 00:09:26.969 | 99.99th=[15664] 00:09:26.969 bw ( KiB/s): min=11704, max=38360, per=52.65%, avg=24597.09, stdev=8334.49, samples=11 00:09:26.969 iops : min= 2926, max= 9590, avg=6149.27, stdev=2083.62, samples=11 00:09:26.969 write: IOPS=6897, BW=26.9MiB/s (28.2MB/s)(143MiB/5302msec); 0 zone resets 00:09:26.969 slat (usec): min=14, max=1656, avg=55.34, stdev=118.37 00:09:26.969 clat (usec): min=996, max=14858, avg=6335.35, stdev=1799.61 00:09:26.969 lat (usec): min=1024, max=14907, avg=6390.69, stdev=1812.40 00:09:26.969 clat percentiles (usec): 00:09:26.969 | 1.00th=[ 2507], 5.00th=[ 3359], 10.00th=[ 3785], 20.00th=[ 4424], 00:09:26.969 | 30.00th=[ 5145], 40.00th=[ 6259], 50.00th=[ 6915], 60.00th=[ 7242], 00:09:26.969 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8094], 95.00th=[ 8455], 00:09:26.969 | 99.00th=[11207], 99.50th=[11731], 99.90th=[13173], 99.95th=[13435], 00:09:26.969 | 99.99th=[14091] 00:09:26.969 bw ( KiB/s): min=12288, max=38352, per=89.32%, avg=24641.45, stdev=8121.77, samples=11 00:09:26.969 iops : min= 3072, max= 9588, avg=6160.36, stdev=2030.44, samples=11 00:09:26.969 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:26.969 lat (msec) : 2=0.27%, 4=6.61%, 10=88.66%, 20=4.43% 00:09:26.969 cpu : usr=6.14%, sys=26.61%, ctx=6369, majf=0, minf=108 00:09:26.969 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:26.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.969 issued rwts: total=70142,36569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.969 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:09:26.969 00:09:26.969 Run status group 0 (all jobs): 00:09:26.969 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=274MiB (287MB), run=6006-6006msec 00:09:26.969 WRITE: bw=26.9MiB/s (28.2MB/s), 26.9MiB/s-26.9MiB/s (28.2MB/s-28.2MB/s), io=143MiB (150MB), run=5302-5302msec 00:09:26.969 00:09:26.969 Disk stats (read/write): 00:09:26.969 nvme0n1: ios=69237/35968, merge=0/0, ticks=494388/207515, in_queue=701903, util=98.68% 00:09:26.969 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:26.970 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.970 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:26.970 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.970 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:26.970 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:26.970 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.970 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:26.970 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.226 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.226 rmmod nvme_tcp 00:09:27.226 rmmod nvme_fabrics 00:09:27.226 rmmod nvme_keyring 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n 
65232 ']' 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # killprocess 65232 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 65232 ']' 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 65232 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.484 08:20:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65232 00:09:27.484 killing process with pid 65232 00:09:27.484 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.484 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.484 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65232' 00:09:27.484 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 65232 00:09:27.484 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 65232 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:27.743 08:20:29 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:27.743 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:28.003 00:09:28.003 real 0m19.516s 00:09:28.003 user 1m12.089s 00:09:28.003 sys 0m10.291s 00:09:28.003 ************************************ 00:09:28.003 END TEST nvmf_target_multipath 00:09:28.003 ************************************ 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.003 ************************************ 00:09:28.003 START TEST nvmf_zcopy 00:09:28.003 ************************************ 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:28.003 * Looking for test storage... 
00:09:28.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.003 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.263 --rc genhtml_branch_coverage=1 00:09:28.263 --rc genhtml_function_coverage=1 00:09:28.263 --rc genhtml_legend=1 00:09:28.263 --rc geninfo_all_blocks=1 00:09:28.263 --rc geninfo_unexecuted_blocks=1 00:09:28.263 00:09:28.263 ' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.263 --rc genhtml_branch_coverage=1 00:09:28.263 --rc genhtml_function_coverage=1 00:09:28.263 --rc genhtml_legend=1 00:09:28.263 --rc geninfo_all_blocks=1 00:09:28.263 --rc geninfo_unexecuted_blocks=1 00:09:28.263 00:09:28.263 ' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:28.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.263 --rc genhtml_branch_coverage=1 00:09:28.263 --rc genhtml_function_coverage=1 00:09:28.263 --rc genhtml_legend=1 00:09:28.263 --rc geninfo_all_blocks=1 00:09:28.263 --rc geninfo_unexecuted_blocks=1 00:09:28.263 00:09:28.263 ' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.263 --rc genhtml_branch_coverage=1 00:09:28.263 --rc genhtml_function_coverage=1 00:09:28.263 --rc genhtml_legend=1 00:09:28.263 --rc geninfo_all_blocks=1 00:09:28.263 --rc geninfo_unexecuted_blocks=1 00:09:28.263 00:09:28.263 ' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
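The lcov gate traced here ("lt 1.15 2" via cmp_versions) is a dotted-version comparison: both version strings are split on '.', '-' and ':' and compared field by field as integers. A rough sketch of the idea, not the exact scripts/common.sh implementation:

  cmp_lt() {                          # cmp_lt 1.15 2  -> succeeds when $1 < $2
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                        # equal is not less-than
  }

Here 1.15 < 2 holds, so the pre-2.0 option spellings (the LCOV_OPTS/LCOV exports traced just above) are selected.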
00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.263 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
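One detail from the common.sh setup traced above: the host identity used for every nvme connect in this suite comes from nvme gen-hostnqn, with the UUID embedded in that NQN reused as the host ID (NVME_HOST collects the --hostnqn/--hostid arguments). Illustratively - the UUID extraction and the connect line below are assumptions for the sketch, not copied from common.sh:

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: host ID = the trailing UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"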
00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:28.263 Cannot find device "nvmf_init_br" 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:28.263 08:20:29 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:28.263 Cannot find device "nvmf_init_br2" 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:28.263 Cannot find device "nvmf_tgt_br" 00:09:28.263 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.264 Cannot find device "nvmf_tgt_br2" 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:28.264 Cannot find device "nvmf_init_br" 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:28.264 Cannot find device "nvmf_init_br2" 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:28.264 Cannot find device "nvmf_tgt_br" 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:28.264 Cannot find device "nvmf_tgt_br2" 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:28.264 Cannot find device "nvmf_br" 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:28.264 Cannot find device "nvmf_init_if" 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:28.264 Cannot find device "nvmf_init_if2" 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:28.264 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:28.523 08:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:28.523 08:20:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:28.523 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:28.523 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:28.523 00:09:28.523 --- 10.0.0.3 ping statistics --- 00:09:28.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.523 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:28.523 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:28.523 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:28.524 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:09:28.524 00:09:28.524 --- 10.0.0.4 ping statistics --- 00:09:28.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.524 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:28.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:28.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:28.524 00:09:28.524 --- 10.0.0.1 ping statistics --- 00:09:28.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.524 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:28.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:28.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:28.524 00:09:28.524 --- 10.0.0.2 ping statistics --- 00:09:28.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.524 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # return 0 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:28.524 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=65755 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 65755 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 65755 ']' 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.782 08:20:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:28.782 [2024-10-15 08:20:30.330078] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
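The pings above confirm the virtual topology that nvmf_veth_init builds for NET_TYPE=virt: two initiator interfaces (10.0.0.1, 10.0.0.2) stay in the root namespace, their target-side counterparts (10.0.0.3, 10.0.0.4) live in the nvmf_tgt_ns_spdk namespace, and everything is stitched together through the nvmf_br bridge, with iptables ACCEPT rules for TCP port 4420. Condensed from the trace, one of the two veth pairs is set up roughly like this:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The nvmf_tgt process started next runs inside that namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -m 0x2), so it listens on 10.0.0.3/10.0.0.4 while the initiator side connects from 10.0.0.1/10.0.0.2.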
00:09:28.782 [2024-10-15 08:20:30.330510] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.782 [2024-10-15 08:20:30.473626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.041 [2024-10-15 08:20:30.561808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.041 [2024-10-15 08:20:30.562214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.041 [2024-10-15 08:20:30.562393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.041 [2024-10-15 08:20:30.562565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.041 [2024-10-15 08:20:30.562584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.041 [2024-10-15 08:20:30.563255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.041 [2024-10-15 08:20:30.641741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 [2024-10-15 08:20:31.428408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.976 [2024-10-15 08:20:31.448568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.976 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.977 malloc0 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:29.977 { 00:09:29.977 "params": { 00:09:29.977 "name": "Nvme$subsystem", 00:09:29.977 "trtype": "$TEST_TRANSPORT", 00:09:29.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.977 "adrfam": "ipv4", 00:09:29.977 "trsvcid": "$NVMF_PORT", 00:09:29.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.977 "hdgst": ${hdgst:-false}, 00:09:29.977 "ddgst": ${ddgst:-false} 00:09:29.977 }, 00:09:29.977 "method": "bdev_nvme_attach_controller" 00:09:29.977 } 00:09:29.977 EOF 00:09:29.977 )") 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
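Stripped of the xtrace noise, the zcopy target stack assembled above amounts to the following RPC sequence (rpc_cmd resolves to scripts/rpc.py against the default socket in this run; the calls and their flags are copied from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0          # 32 MiB ramdisk, 4 KiB blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

In other words: a TCP transport with zero-copy enabled and in-capsule data sized to 0, one subsystem backed by a malloc bdev, plus the discovery listener, all on 10.0.0.3:4420.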
00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:29.977 08:20:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:29.977 "params": { 00:09:29.977 "name": "Nvme1", 00:09:29.977 "trtype": "tcp", 00:09:29.977 "traddr": "10.0.0.3", 00:09:29.977 "adrfam": "ipv4", 00:09:29.977 "trsvcid": "4420", 00:09:29.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.977 "hdgst": false, 00:09:29.977 "ddgst": false 00:09:29.977 }, 00:09:29.977 "method": "bdev_nvme_attach_controller" 00:09:29.977 }' 00:09:29.977 [2024-10-15 08:20:31.580891] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:09:29.977 [2024-10-15 08:20:31.581571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65788 ] 00:09:30.236 [2024-10-15 08:20:31.720551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.236 [2024-10-15 08:20:31.805410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.236 [2024-10-15 08:20:31.887736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.496 Running I/O for 10 seconds... 00:09:32.374 5441.00 IOPS, 42.51 MiB/s [2024-10-15T08:20:35.039Z] 5551.50 IOPS, 43.37 MiB/s [2024-10-15T08:20:36.415Z] 5627.00 IOPS, 43.96 MiB/s [2024-10-15T08:20:37.352Z] 5666.25 IOPS, 44.27 MiB/s [2024-10-15T08:20:38.287Z] 5657.80 IOPS, 44.20 MiB/s [2024-10-15T08:20:39.221Z] 5672.83 IOPS, 44.32 MiB/s [2024-10-15T08:20:40.156Z] 5686.71 IOPS, 44.43 MiB/s [2024-10-15T08:20:41.092Z] 5700.25 IOPS, 44.53 MiB/s [2024-10-15T08:20:42.468Z] 5712.56 IOPS, 44.63 MiB/s [2024-10-15T08:20:42.468Z] 5723.50 IOPS, 44.71 MiB/s 00:09:40.737 Latency(us) 00:09:40.737 [2024-10-15T08:20:42.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.737 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:40.737 Verification LBA range: start 0x0 length 0x1000 00:09:40.737 Nvme1n1 : 10.02 5726.13 44.74 0.00 0.00 22283.91 3425.75 32887.16 00:09:40.737 [2024-10-15T08:20:42.468Z] =================================================================================================================== 00:09:40.737 [2024-10-15T08:20:42.468Z] Total : 5726.13 44.74 0.00 0.00 22283.91 3425.75 32887.16 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65905 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:40.737 { 00:09:40.737 "params": { 00:09:40.737 "name": "Nvme$subsystem", 00:09:40.737 "trtype": "$TEST_TRANSPORT", 00:09:40.737 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:09:40.737 "adrfam": "ipv4", 00:09:40.737 "trsvcid": "$NVMF_PORT", 00:09:40.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.737 "hdgst": ${hdgst:-false}, 00:09:40.737 "ddgst": ${ddgst:-false} 00:09:40.737 }, 00:09:40.737 "method": "bdev_nvme_attach_controller" 00:09:40.737 } 00:09:40.737 EOF 00:09:40.737 )") 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:40.737 [2024-10-15 08:20:42.325487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.325541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:40.737 08:20:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:40.737 "params": { 00:09:40.737 "name": "Nvme1", 00:09:40.737 "trtype": "tcp", 00:09:40.737 "traddr": "10.0.0.3", 00:09:40.737 "adrfam": "ipv4", 00:09:40.737 "trsvcid": "4420", 00:09:40.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.737 "hdgst": false, 00:09:40.737 "ddgst": false 00:09:40.737 }, 00:09:40.737 "method": "bdev_nvme_attach_controller" 00:09:40.737 }' 00:09:40.737 [2024-10-15 08:20:42.333438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.333619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 [2024-10-15 08:20:42.341429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.341462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 [2024-10-15 08:20:42.349430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.349462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 [2024-10-15 08:20:42.357453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.357490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 [2024-10-15 08:20:42.369434] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.369466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 [2024-10-15 08:20:42.381444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.381474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 [2024-10-15 08:20:42.387464] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:09:40.737 [2024-10-15 08:20:42.387556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65905 ] 00:09:40.737 [2024-10-15 08:20:42.393448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.393480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 [2024-10-15 08:20:42.405443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.405476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.737 [2024-10-15 08:20:42.417452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.737 [2024-10-15 08:20:42.417485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.738 [2024-10-15 08:20:42.429463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.738 [2024-10-15 08:20:42.429495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.738 [2024-10-15 08:20:42.441466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.738 [2024-10-15 08:20:42.441497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.738 [2024-10-15 08:20:42.453506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.738 [2024-10-15 08:20:42.453551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.738 [2024-10-15 08:20:42.465477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.738 [2024-10-15 08:20:42.465509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.477471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.477501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.489506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.489540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.501494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.501522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.513484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.513511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.525484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.525511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.528368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.997 [2024-10-15 08:20:42.533479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.533506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.545497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.545537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.553488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.553518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.561486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.561513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.573498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.573532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.585490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.585519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.597512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.597543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.609509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.609543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.610920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.997 [2024-10-15 08:20:42.621503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.621530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.633527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.633569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.645537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.645577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.657538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.657578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.665536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.665569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.673534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.673572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.681534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.681571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 
[2024-10-15 08:20:42.689529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.689561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.691340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.997 [2024-10-15 08:20:42.697530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.697560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.705534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.705565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.713534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.713565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.997 [2024-10-15 08:20:42.721536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.997 [2024-10-15 08:20:42.721568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.729530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.729557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.741532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.741560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.749555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.749592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.757555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.757588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.765561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.765593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.773596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.773631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.781581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.781613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.789590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.789622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.797597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.797630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 
08:20:42.805600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.805628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.813612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.813646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 Running I/O for 5 seconds... 00:09:41.255 [2024-10-15 08:20:42.825616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.825646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.840707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.840744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.851651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.255 [2024-10-15 08:20:42.851692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.255 [2024-10-15 08:20:42.868384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.256 [2024-10-15 08:20:42.868431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.256 [2024-10-15 08:20:42.885065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.256 [2024-10-15 08:20:42.885122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.256 [2024-10-15 08:20:42.903587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.256 [2024-10-15 08:20:42.903652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.256 [2024-10-15 08:20:42.915333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.256 [2024-10-15 08:20:42.915369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.256 [2024-10-15 08:20:42.927160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.256 [2024-10-15 08:20:42.927206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.256 [2024-10-15 08:20:42.942471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.256 [2024-10-15 08:20:42.942514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.256 [2024-10-15 08:20:42.957795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.256 [2024-10-15 08:20:42.957833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.256 [2024-10-15 08:20:42.973596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.256 [2024-10-15 08:20:42.973632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:42.989940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:42.989977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:42.999723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:41.514 [2024-10-15 08:20:42.999759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.016161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.016211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.026326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.026360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.042381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.042433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.056182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.056217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.066121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.066184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.078437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.078488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.093604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.093642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.109986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.110052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.120384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.120419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.132077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.132128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.143528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.143575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.158996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.159060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.169068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.169119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.184778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.184845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.200629] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.200684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.210523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.210558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.225922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.226000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.514 [2024-10-15 08:20:43.241760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.514 [2024-10-15 08:20:43.241798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.252028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.252115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.264323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.264358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.275565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.275601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.292134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.292233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.308693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.308732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.318996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.319050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.331322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.331360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.342917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.342953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.356653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.356689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.373196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.373232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.389616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.389651] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.399515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.399567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.415640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.415676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.425275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.425308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.440655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.440731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.450755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.450808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.463410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.463462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.474672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.474709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.772 [2024-10-15 08:20:43.490685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.772 [2024-10-15 08:20:43.490721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.506573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.506611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.523544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.523594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.541498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.541536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.556937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.556990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.576760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.576797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.588146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.588193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.603327] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.603362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.618784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.618827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.628430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.030 [2024-10-15 08:20:43.628465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.030 [2024-10-15 08:20:43.643121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.643168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.031 [2024-10-15 08:20:43.654181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.654216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.031 [2024-10-15 08:20:43.667980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.668015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.031 [2024-10-15 08:20:43.685387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.685431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.031 [2024-10-15 08:20:43.700852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.700889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.031 [2024-10-15 08:20:43.710342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.710379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.031 [2024-10-15 08:20:43.726589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.726625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.031 [2024-10-15 08:20:43.744516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.744568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.031 [2024-10-15 08:20:43.759158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.031 [2024-10-15 08:20:43.759204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.289 [2024-10-15 08:20:43.768590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.289 [2024-10-15 08:20:43.768625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.289 [2024-10-15 08:20:43.780739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.289 [2024-10-15 08:20:43.780775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.289 [2024-10-15 08:20:43.792011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.289 [2024-10-15 08:20:43.792047] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.289 [2024-10-15 08:20:43.804922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.289 [2024-10-15 08:20:43.804958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 11138.00 IOPS, 87.02 MiB/s [2024-10-15T08:20:44.021Z] [2024-10-15 08:20:43.824534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.824571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.839710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.839749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.849677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.849712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.864920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.864955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.880294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.880329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.890176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.890210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.901968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.902002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.913029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.913065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.926107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.926158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.944162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.944200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.959068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.959105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.969011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.969047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:43.983086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.983137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 
08:20:43.997799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:43.997836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.290 [2024-10-15 08:20:44.013853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.290 [2024-10-15 08:20:44.013905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.024310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.024346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.039005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.039040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.049739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.049781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.064712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.064746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.080737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.080774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.090677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.090713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.102355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.102391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.113930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.113965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.131637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.131688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.147876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.147913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.157671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.157707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.173872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.173908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.548 [2024-10-15 08:20:44.183752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.548 [2024-10-15 08:20:44.183803] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.549 [2024-10-15 08:20:44.198264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.549 [2024-10-15 08:20:44.198306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.549 [2024-10-15 08:20:44.207949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.549 [2024-10-15 08:20:44.207985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.549 [2024-10-15 08:20:44.223834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.549 [2024-10-15 08:20:44.223870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.549 [2024-10-15 08:20:44.239510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.549 [2024-10-15 08:20:44.239546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.549 [2024-10-15 08:20:44.249707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.549 [2024-10-15 08:20:44.249742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.549 [2024-10-15 08:20:44.265033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.549 [2024-10-15 08:20:44.265101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.281280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.281315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.297636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.297673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.315888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.315926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.330782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.330818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.346293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.346330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.355899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.355933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.367721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.367757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.378746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.378782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.390068] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.390119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.405092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.405188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.421325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.421377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.430979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.431030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.447015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.447050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.457603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.457637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.472699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.472735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.483084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.483136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.498355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.498390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.508547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.508599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.807 [2024-10-15 08:20:44.523596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.807 [2024-10-15 08:20:44.523634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.539866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.539902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.556468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.556504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.567080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.567127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.579232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.579267] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.590960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.590995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.607118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.607179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.624010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.624061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.640538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.640573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.650681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.650718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.665784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.665823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.682158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.682192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.692727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.692786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.705028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.705079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.716088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.716171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.732506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.732541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.748558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.748593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.766372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.766411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.776952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.776988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.066 [2024-10-15 08:20:44.788050] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.066 [2024-10-15 08:20:44.788086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 [2024-10-15 08:20:44.799610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.799645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 [2024-10-15 08:20:44.815855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.815891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 11224.50 IOPS, 87.69 MiB/s [2024-10-15T08:20:45.056Z] [2024-10-15 08:20:44.831647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.831682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 [2024-10-15 08:20:44.841686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.841722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 [2024-10-15 08:20:44.855468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.855506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 [2024-10-15 08:20:44.866566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.866601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 [2024-10-15 08:20:44.882874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.882909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 [2024-10-15 08:20:44.895093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.895158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.325 [2024-10-15 08:20:44.904685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.325 [2024-10-15 08:20:44.904720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:44.919854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:44.919891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:44.930819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:44.930864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:44.945659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:44.945694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:44.963819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:44.963858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:44.974706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:43.326 [2024-10-15 08:20:44.974741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:44.985706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:44.985741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:44.996584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:44.996620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:45.012583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:45.012619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:45.028483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:45.028519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:45.037742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:45.037777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.326 [2024-10-15 08:20:45.051155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.326 [2024-10-15 08:20:45.051190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.062001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.062037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.073262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.073296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.084125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.084173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.100351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.100386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.109973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.110009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.122290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.122325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.133482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.133517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.144600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.144635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.155660] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.155695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.168844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.168880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.179180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.179214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.193870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.193906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.209900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.209935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.219944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.219978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.231702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.231737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.242543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.242578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.257448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.257483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.273991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.274056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.290877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.290927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.584 [2024-10-15 08:20:45.301037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.584 [2024-10-15 08:20:45.301087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.316157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.316192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.332208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.332242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.341550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.341585] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.354585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.354619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.364561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.364593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.379699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.379740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.397317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.397367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.408330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.408365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.419802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.419853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.434447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.434484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.444582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.444617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.460028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.460064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.476730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.476775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.486895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.486932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.501955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.501990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.516747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.516797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.526054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.526090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.538201] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.538236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.549553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.549588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.560832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.560867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.844 [2024-10-15 08:20:45.572965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.844 [2024-10-15 08:20:45.573000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.103 [2024-10-15 08:20:45.588913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.103 [2024-10-15 08:20:45.588951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.103 [2024-10-15 08:20:45.606584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.103 [2024-10-15 08:20:45.606619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.103 [2024-10-15 08:20:45.615991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.103 [2024-10-15 08:20:45.616041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.103 [2024-10-15 08:20:45.630741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.103 [2024-10-15 08:20:45.630776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.103 [2024-10-15 08:20:45.640379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.103 [2024-10-15 08:20:45.640413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.103 [2024-10-15 08:20:45.656108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.103 [2024-10-15 08:20:45.656162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.103 [2024-10-15 08:20:45.665900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.103 [2024-10-15 08:20:45.665934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.103 [2024-10-15 08:20:45.680918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.103 [2024-10-15 08:20:45.680953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.691530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.691565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.705979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.706015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.716080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.716141] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.727889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.727924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.738734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.738769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.750217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.750252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.761593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.761627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.774619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.774669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.784331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.784364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.799803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.799841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 [2024-10-15 08:20:45.810178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.810212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.104 11286.67 IOPS, 88.18 MiB/s [2024-10-15T08:20:45.835Z] [2024-10-15 08:20:45.824936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.104 [2024-10-15 08:20:45.824965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.841016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.841053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.851143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.851176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.862684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.862719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.875533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.875577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.893219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.893275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 
08:20:45.908336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.908370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.917846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.917879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.929925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.929960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.940758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.940801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.951932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.951969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.964778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.964813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.974722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.974756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:45.990304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:45.990338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:46.000547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:46.000594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:46.014954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:46.014989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:46.024835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:46.024870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:46.039489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:46.039524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:46.049946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:46.049980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:46.064673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:46.064706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:46.075126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:46.075158] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.363 [2024-10-15 08:20:46.089910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.363 [2024-10-15 08:20:46.089945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.106456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.106524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.116430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.116480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.129049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.129117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.140677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.140711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.155724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.155758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.171899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.171933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.188186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.188264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.204971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.205021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.215259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.215292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.227011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.227064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.242317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.622 [2024-10-15 08:20:46.242361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.622 [2024-10-15 08:20:46.252585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.623 [2024-10-15 08:20:46.252620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.623 [2024-10-15 08:20:46.267585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.623 [2024-10-15 08:20:46.267635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.623 [2024-10-15 08:20:46.277948] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.623 [2024-10-15 08:20:46.277982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.623 [2024-10-15 08:20:46.292910] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.623 [2024-10-15 08:20:46.292961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.623 [2024-10-15 08:20:46.308756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.623 [2024-10-15 08:20:46.308800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.623 [2024-10-15 08:20:46.326755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.623 [2024-10-15 08:20:46.326807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.623 [2024-10-15 08:20:46.341466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.623 [2024-10-15 08:20:46.341500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.357857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.357892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.367929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.367961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.379288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.379323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.390026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.390060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.401136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.401169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.414567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.414603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.430920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.430956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.448689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.448723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.463544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.463582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.479536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.881 [2024-10-15 08:20:46.479571] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.881 [2024-10-15 08:20:46.489574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.489606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.504595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.504630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.519750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.519799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.530130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.530177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.542617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.542653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.558233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.558295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.568939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.568974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.584797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.584838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.595180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.595219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.882 [2024-10-15 08:20:46.610123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.882 [2024-10-15 08:20:46.610171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.140 [2024-10-15 08:20:46.620281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.140 [2024-10-15 08:20:46.620319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.140 [2024-10-15 08:20:46.635632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.140 [2024-10-15 08:20:46.635674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.140 [2024-10-15 08:20:46.646100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.140 [2024-10-15 08:20:46.646298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.140 [2024-10-15 08:20:46.661333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.140 [2024-10-15 08:20:46.661507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.140 [2024-10-15 08:20:46.678553] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.140 [2024-10-15 08:20:46.678594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.140 [2024-10-15 08:20:46.688042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.688083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.703393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.703555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.713503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.713543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.727635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.727678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.738150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.738189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.753090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.753140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.769248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.769287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.787742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.787782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.798244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.798283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.809432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.809604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.821021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.821217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 11298.25 IOPS, 88.27 MiB/s [2024-10-15T08:20:46.872Z] [2024-10-15 08:20:46.836188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.836238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.846948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.141 [2024-10-15 08:20:46.846988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.141 [2024-10-15 08:20:46.861248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:45.141 [2024-10-15 08:20:46.861286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.871999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.872184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.883090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.883137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.899218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.899253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.909717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.909753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.921451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.921485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.936869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.936907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.946846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.946882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.962552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.962587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.972454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.972488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.984319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.984355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:46.995289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:46.995325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.007534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.007578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.023188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.023237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.032282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.032317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.048590] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.048628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.059346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.059378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.070821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.070855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.087339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.087373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.104566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.104603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.400 [2024-10-15 08:20:47.114907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.400 [2024-10-15 08:20:47.114942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.130088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.130138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.140638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.140674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.155643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.155678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.166730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.166764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.181356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.181393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.191759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.191793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.203532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.203567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.214568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.214603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.227340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.227376] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.245472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.245508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.260682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.260718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.270399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.270434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.285564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.285599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.295978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.296012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.307374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.307409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.323195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.323226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.341245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.341293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.351531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.351570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.366786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.366821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.377320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.377353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.391976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.392011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.409389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.409434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.425386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.425437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.442289] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.442324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.733 [2024-10-15 08:20:47.451913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.733 [2024-10-15 08:20:47.451947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.464156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.464191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.475692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.475728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.491606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.491640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.506840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.506907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.516415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.516452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.528441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.528476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.544368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.544409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.560667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.560703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.569839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.569874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.583356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.583392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.594374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.594408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.605658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.605692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.621153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.621188] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.637846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.637882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.647746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.647781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.660013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.660063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.670785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.670821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.688124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.688187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.706747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.706783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.993 [2024-10-15 08:20:47.717016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.993 [2024-10-15 08:20:47.717051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.732502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.732547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.749388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.749433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.759677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.759712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.772031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.772082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.783224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.783256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.800036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.800114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.817660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.817695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 11301.60 IOPS, 88.29 MiB/s [2024-10-15T08:20:47.984Z] [2024-10-15 
08:20:47.832353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.832387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.837988] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.838020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 00:09:46.253 Latency(us) 00:09:46.253 [2024-10-15T08:20:47.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.253 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:46.253 Nvme1n1 : 5.01 11303.17 88.31 0.00 0.00 11308.93 4825.83 23354.65 00:09:46.253 [2024-10-15T08:20:47.984Z] =================================================================================================================== 00:09:46.253 [2024-10-15T08:20:47.984Z] Total : 11303.17 88.31 0.00 0.00 11308.93 4825.83 23354.65 00:09:46.253 [2024-10-15 08:20:47.845976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.846009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.853970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.854047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.861968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.861996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.874011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.874054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.882002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.882069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.890021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.890055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.902023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.902080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.914012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.914053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.926021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.926086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.938030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.938072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.946019] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.946056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.958052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.958112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.966022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.966059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.253 [2024-10-15 08:20:47.978056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.253 [2024-10-15 08:20:47.978106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:47.986029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:47.986068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:47.998053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:47.998094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.010056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.010099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.018040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.018075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.030041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.030074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.038026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.038053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.046026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.046054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.054044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.054077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.062046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.062081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.074067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.074114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.082034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.082061] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.090032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.090060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 [2024-10-15 08:20:48.098034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.512 [2024-10-15 08:20:48.098061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.512 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65905) - No such process 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65905 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.512 delay0 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.512 08:20:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:46.770 [2024-10-15 08:20:48.298097] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:53.331 Initializing NVMe Controllers 00:09:53.331 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.331 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:53.331 Initialization complete. Launching workers. 
00:09:53.331 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108 00:09:53.331 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 395, failed to submit 33 00:09:53.331 success 265, unsuccessful 130, failed 0 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.331 rmmod nvme_tcp 00:09:53.331 rmmod nvme_fabrics 00:09:53.331 rmmod nvme_keyring 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 65755 ']' 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 65755 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 65755 ']' 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 65755 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65755 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:53.331 killing process with pid 65755 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65755' 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 65755 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 65755 00:09:53.331 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:53.332 08:20:54 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.332 08:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.332 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:53.332 00:09:53.332 real 0m25.387s 00:09:53.332 user 0m41.025s 00:09:53.332 sys 0m7.017s 00:09:53.332 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.332 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.332 ************************************ 00:09:53.332 END TEST nvmf_zcopy 00:09:53.332 ************************************ 00:09:53.332 08:20:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:53.332 08:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:53.332 08:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.332 08:20:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.590 ************************************ 00:09:53.590 START TEST nvmf_nmic 00:09:53.590 ************************************ 00:09:53.590 08:20:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:53.590 * Looking for test storage... 00:09:53.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:53.590 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:53.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.591 --rc genhtml_branch_coverage=1 00:09:53.591 --rc genhtml_function_coverage=1 00:09:53.591 --rc genhtml_legend=1 00:09:53.591 --rc geninfo_all_blocks=1 00:09:53.591 --rc geninfo_unexecuted_blocks=1 00:09:53.591 00:09:53.591 ' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:53.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.591 --rc genhtml_branch_coverage=1 00:09:53.591 --rc genhtml_function_coverage=1 00:09:53.591 --rc genhtml_legend=1 00:09:53.591 --rc geninfo_all_blocks=1 00:09:53.591 --rc geninfo_unexecuted_blocks=1 00:09:53.591 00:09:53.591 ' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:53.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.591 --rc genhtml_branch_coverage=1 00:09:53.591 --rc genhtml_function_coverage=1 00:09:53.591 --rc genhtml_legend=1 00:09:53.591 --rc geninfo_all_blocks=1 00:09:53.591 --rc geninfo_unexecuted_blocks=1 00:09:53.591 00:09:53.591 ' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:53.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.591 --rc genhtml_branch_coverage=1 00:09:53.591 --rc genhtml_function_coverage=1 00:09:53.591 --rc genhtml_legend=1 00:09:53.591 --rc geninfo_all_blocks=1 00:09:53.591 --rc geninfo_unexecuted_blocks=1 00:09:53.591 00:09:53.591 ' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.591 08:20:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.591 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:53.591 08:20:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:53.591 Cannot 
find device "nvmf_init_br" 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:53.591 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:53.849 Cannot find device "nvmf_init_br2" 00:09:53.849 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:53.849 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:53.849 Cannot find device "nvmf_tgt_br" 00:09:53.849 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:53.849 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.849 Cannot find device "nvmf_tgt_br2" 00:09:53.849 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:53.850 Cannot find device "nvmf_init_br" 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:53.850 Cannot find device "nvmf_init_br2" 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:53.850 Cannot find device "nvmf_tgt_br" 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:53.850 Cannot find device "nvmf_tgt_br2" 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:53.850 Cannot find device "nvmf_br" 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:53.850 Cannot find device "nvmf_init_if" 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:53.850 Cannot find device "nvmf_init_if2" 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:53.850 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:54.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:54.109 00:09:54.109 --- 10.0.0.3 ping statistics --- 00:09:54.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.109 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:54.109 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:54.109 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:09:54.109 00:09:54.109 --- 10.0.0.4 ping statistics --- 00:09:54.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.109 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:54.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:54.109 00:09:54.109 --- 10.0.0.1 ping statistics --- 00:09:54.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.109 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:54.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:09:54.109 00:09:54.109 --- 10.0.0.2 ping statistics --- 00:09:54.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.109 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # return 0 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=66286 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 66286 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 66286 ']' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.109 08:20:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.109 [2024-10-15 08:20:55.758114] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
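The ip/iptables commands captured above are nvmf_veth_init building the test topology: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, all bridge-side ends enslaved to nvmf_br, and the 10.0.0.0/24 addresses verified by the four pings. A minimal standalone sketch of the same layout, condensed to one initiator/target pair and using only the interface names, addresses, and firewall rule that appear in the log (not the full helper from nvmf/common.sh):

    # condensed reconstruction of the topology nvmf_veth_init assembles (sketch, one pair only)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                               # bridge joins the bridge-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP port, as in the log
    ping -c 1 10.0.0.3                                            # initiator -> target, same check as above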
00:09:54.109 [2024-10-15 08:20:55.758235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.368 [2024-10-15 08:20:55.899230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.368 [2024-10-15 08:20:55.984473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.368 [2024-10-15 08:20:55.984548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.368 [2024-10-15 08:20:55.984563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.368 [2024-10-15 08:20:55.984574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.368 [2024-10-15 08:20:55.984583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.368 [2024-10-15 08:20:55.986087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.368 [2024-10-15 08:20:55.986196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.368 [2024-10-15 08:20:55.986291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.368 [2024-10-15 08:20:55.986291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.368 [2024-10-15 08:20:56.061336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 [2024-10-15 08:20:56.184238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 Malloc0 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:54.627 08:20:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 [2024-10-15 08:20:56.257513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:54.627 test case1: single bdev can't be used in multiple subsystems 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 [2024-10-15 08:20:56.281306] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:54.627 [2024-10-15 08:20:56.281346] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:54.627 [2024-10-15 08:20:56.281358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.627 request: 00:09:54.627 { 00:09:54.627 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:54.627 "namespace": { 00:09:54.627 "bdev_name": "Malloc0", 00:09:54.627 "no_auto_visible": false 00:09:54.627 }, 00:09:54.627 "method": "nvmf_subsystem_add_ns", 00:09:54.627 "req_id": 1 00:09:54.627 } 00:09:54.627 Got JSON-RPC error response 00:09:54.627 response: 00:09:54.627 { 00:09:54.627 "code": -32602, 00:09:54.627 "message": "Invalid parameters" 00:09:54.627 } 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:54.627 Adding namespace failed - expected result. 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:54.627 test case2: host connect to nvmf target in multiple paths 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.627 [2024-10-15 08:20:56.297474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.627 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:54.886 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:54.886 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:54.886 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:54.886 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.886 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:54.886 08:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:57.419 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:57.419 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:57.419 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.419 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:57.419 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.419 08:20:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:57.419 08:20:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:57.419 [global] 00:09:57.419 thread=1 00:09:57.419 invalidate=1 00:09:57.419 rw=write 00:09:57.419 time_based=1 00:09:57.419 runtime=1 00:09:57.419 ioengine=libaio 00:09:57.419 direct=1 00:09:57.419 bs=4096 00:09:57.419 iodepth=1 00:09:57.419 norandommap=0 00:09:57.419 numjobs=1 00:09:57.419 00:09:57.419 verify_dump=1 00:09:57.419 verify_backlog=512 00:09:57.419 verify_state_save=0 00:09:57.419 do_verify=1 00:09:57.419 verify=crc32c-intel 00:09:57.419 [job0] 00:09:57.419 filename=/dev/nvme0n1 00:09:57.419 Could not set queue depth (nvme0n1) 00:09:57.419 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.419 fio-3.35 00:09:57.419 Starting 1 thread 00:09:58.355 00:09:58.355 job0: (groupid=0, jobs=1): err= 0: pid=66366: Tue Oct 15 08:20:59 2024 00:09:58.355 read: IOPS=2561, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:58.355 slat (nsec): min=13112, max=57604, avg=16501.20, stdev=3394.28 00:09:58.355 clat (usec): min=160, max=871, avg=196.45, stdev=28.19 00:09:58.355 lat (usec): min=176, max=886, avg=212.95, stdev=28.79 00:09:58.355 clat percentiles (usec): 00:09:58.355 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:09:58.355 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 198], 00:09:58.355 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 219], 00:09:58.355 | 99.00th=[ 237], 99.50th=[ 338], 99.90th=[ 603], 99.95th=[ 619], 00:09:58.355 | 99.99th=[ 873] 00:09:58.355 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:58.355 slat (nsec): min=17925, max=98880, avg=24319.87, stdev=5195.75 00:09:58.355 clat (usec): min=96, max=754, avg=120.02, stdev=21.14 00:09:58.355 lat (usec): min=119, max=787, avg=144.34, stdev=22.63 00:09:58.355 clat percentiles (usec): 00:09:58.355 | 1.00th=[ 103], 5.00th=[ 106], 10.00th=[ 109], 20.00th=[ 111], 00:09:58.355 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 120], 00:09:58.355 | 70.00th=[ 123], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 141], 00:09:58.355 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 293], 99.95th=[ 742], 00:09:58.355 | 99.99th=[ 758] 00:09:58.355 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:58.355 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:58.355 lat (usec) : 100=0.05%, 250=99.61%, 500=0.16%, 750=0.14%, 1000=0.04% 00:09:58.355 cpu : usr=2.90%, sys=8.40%, ctx=5643, majf=0, minf=5 00:09:58.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.355 issued rwts: total=2564,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.355 00:09:58.355 Run status group 0 (all jobs): 00:09:58.355 READ: bw=10.0MiB/s (10.5MB/s), 10.0MiB/s-10.0MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:58.355 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:58.355 00:09:58.355 Disk stats (read/write): 00:09:58.355 nvme0n1: ios=2474/2560, merge=0/0, 
ticks=522/329, in_queue=851, util=91.37% 00:09:58.355 08:20:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:58.355 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.355 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:58.355 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:58.355 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.613 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:58.613 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.613 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:58.613 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:58.613 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.614 rmmod nvme_tcp 00:09:58.614 rmmod nvme_fabrics 00:09:58.614 rmmod nvme_keyring 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 66286 ']' 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 66286 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 66286 ']' 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 66286 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66286 00:09:58.614 killing process with pid 66286 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66286' 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@969 -- # kill 66286 00:09:58.614 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 66286 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:58.872 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:59.131 00:09:59.131 real 0m5.733s 00:09:59.131 user 0m16.855s 00:09:59.131 sys 0m2.371s 00:09:59.131 ************************************ 00:09:59.131 END TEST nvmf_nmic 00:09:59.131 ************************************ 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.131 ************************************ 00:09:59.131 START TEST nvmf_fio_target 00:09:59.131 ************************************ 00:09:59.131 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:59.390 * Looking for test storage... 00:09:59.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.390 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:59.390 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:59.390 08:21:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:59.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.390 --rc genhtml_branch_coverage=1 00:09:59.390 --rc genhtml_function_coverage=1 00:09:59.390 --rc genhtml_legend=1 00:09:59.390 --rc geninfo_all_blocks=1 00:09:59.390 --rc geninfo_unexecuted_blocks=1 00:09:59.390 00:09:59.390 ' 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:59.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.390 --rc genhtml_branch_coverage=1 00:09:59.390 --rc genhtml_function_coverage=1 00:09:59.390 --rc genhtml_legend=1 00:09:59.390 --rc geninfo_all_blocks=1 00:09:59.390 --rc geninfo_unexecuted_blocks=1 00:09:59.390 00:09:59.390 ' 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:59.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.390 --rc genhtml_branch_coverage=1 00:09:59.390 --rc genhtml_function_coverage=1 00:09:59.390 --rc genhtml_legend=1 00:09:59.390 --rc geninfo_all_blocks=1 00:09:59.390 --rc geninfo_unexecuted_blocks=1 00:09:59.390 00:09:59.390 ' 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:59.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.390 --rc genhtml_branch_coverage=1 00:09:59.390 --rc genhtml_function_coverage=1 00:09:59.390 --rc genhtml_legend=1 00:09:59.390 --rc geninfo_all_blocks=1 00:09:59.390 --rc geninfo_unexecuted_blocks=1 00:09:59.390 00:09:59.390 ' 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.390 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:59.390 
08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.391 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.391 08:21:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.391 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:59.392 Cannot find device "nvmf_init_br" 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:59.392 Cannot find device "nvmf_init_br2" 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:59.392 Cannot find device "nvmf_tgt_br" 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.392 Cannot find device "nvmf_tgt_br2" 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:59.392 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:59.651 Cannot find device "nvmf_init_br" 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:59.651 Cannot find device "nvmf_init_br2" 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:59.651 Cannot find device "nvmf_tgt_br" 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:59.651 Cannot find device "nvmf_tgt_br2" 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:59.651 Cannot find device "nvmf_br" 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:59.651 Cannot find device "nvmf_init_if" 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:59.651 Cannot find device "nvmf_init_if2" 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:59.651 
08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.651 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.651 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:59.910 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.910 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:09:59.910 00:09:59.910 --- 10.0.0.3 ping statistics --- 00:09:59.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.910 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:59.910 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:59.910 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:09:59.910 00:09:59.910 --- 10.0.0.4 ping statistics --- 00:09:59.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.910 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:59.910 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:59.910 00:09:59.911 --- 10.0.0.1 ping statistics --- 00:09:59.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.911 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:59.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:59.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:09:59.911 00:09:59.911 --- 10.0.0.2 ping statistics --- 00:09:59.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.911 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # return 0 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=66598 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 66598 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 66598 ']' 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.911 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.911 [2024-10-15 08:21:01.530358] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
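For reference, the veth/bridge topology that nvmf_veth_init assembled in the trace above can be reproduced by hand with roughly the commands below. This is a minimal sketch distilled from the ip/iptables calls already logged (namespace, interface, and address names are the ones the test harness uses); it is not an additional step the harness performs.

    # target-side network namespace
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one for the initiator, one for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    # move the target end into the namespace and address both ends
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bring the links up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the two host-side veth peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    # allow NVMe/TCP traffic on the default port and verify reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
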
00:09:59.911 [2024-10-15 08:21:01.531150] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.169 [2024-10-15 08:21:01.676718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.169 [2024-10-15 08:21:01.763690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.169 [2024-10-15 08:21:01.763780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.169 [2024-10-15 08:21:01.763796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.170 [2024-10-15 08:21:01.763807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.170 [2024-10-15 08:21:01.763816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.170 [2024-10-15 08:21:01.765294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.170 [2024-10-15 08:21:01.765539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.170 [2024-10-15 08:21:01.765413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.170 [2024-10-15 08:21:01.765530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.170 [2024-10-15 08:21:01.839040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:00.428 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.428 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:00.428 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:00.428 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.428 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.428 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.428 08:21:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:00.687 [2024-10-15 08:21:02.265244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.687 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.945 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:00.945 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.204 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:01.462 08:21:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.721 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:01.721 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.979 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:01.979 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:02.546 08:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.804 08:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:02.804 08:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.063 08:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:03.063 08:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.321 08:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:03.321 08:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:03.599 08:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.166 08:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:04.166 08:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.425 08:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:04.425 08:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.425 08:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:04.991 [2024-10-15 08:21:06.454487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:04.991 08:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:05.250 08:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:05.508 08:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:05.508 08:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:05.508 08:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:05.508 08:21:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.508 08:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:05.508 08:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:05.508 08:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:07.488 08:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:07.488 08:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:07.488 08:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.488 08:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:07.488 08:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.488 08:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:07.488 08:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.488 [global] 00:10:07.488 thread=1 00:10:07.488 invalidate=1 00:10:07.488 rw=write 00:10:07.488 time_based=1 00:10:07.488 runtime=1 00:10:07.488 ioengine=libaio 00:10:07.488 direct=1 00:10:07.488 bs=4096 00:10:07.488 iodepth=1 00:10:07.488 norandommap=0 00:10:07.488 numjobs=1 00:10:07.488 00:10:07.746 verify_dump=1 00:10:07.746 verify_backlog=512 00:10:07.746 verify_state_save=0 00:10:07.746 do_verify=1 00:10:07.746 verify=crc32c-intel 00:10:07.746 [job0] 00:10:07.746 filename=/dev/nvme0n1 00:10:07.746 [job1] 00:10:07.746 filename=/dev/nvme0n2 00:10:07.746 [job2] 00:10:07.746 filename=/dev/nvme0n3 00:10:07.746 [job3] 00:10:07.746 filename=/dev/nvme0n4 00:10:07.746 Could not set queue depth (nvme0n1) 00:10:07.746 Could not set queue depth (nvme0n2) 00:10:07.746 Could not set queue depth (nvme0n3) 00:10:07.746 Could not set queue depth (nvme0n4) 00:10:07.746 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.746 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.746 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.746 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.746 fio-3.35 00:10:07.746 Starting 4 threads 00:10:09.123 00:10:09.123 job0: (groupid=0, jobs=1): err= 0: pid=66791: Tue Oct 15 08:21:10 2024 00:10:09.123 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:09.123 slat (nsec): min=12354, max=76760, avg=15557.78, stdev=3438.84 00:10:09.123 clat (usec): min=144, max=2248, avg=194.92, stdev=66.24 00:10:09.123 lat (usec): min=158, max=2274, avg=210.48, stdev=66.44 00:10:09.123 clat percentiles (usec): 00:10:09.123 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:10:09.123 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:10:09.123 | 70.00th=[ 188], 80.00th=[ 219], 90.00th=[ 260], 95.00th=[ 322], 00:10:09.123 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 586], 99.95th=[ 1090], 00:10:09.123 | 99.99th=[ 
2245] 00:10:09.123 write: IOPS=2797, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec); 0 zone resets 00:10:09.123 slat (nsec): min=13040, max=92444, avg=22631.68, stdev=5245.34 00:10:09.124 clat (usec): min=94, max=818, avg=138.66, stdev=40.68 00:10:09.124 lat (usec): min=114, max=838, avg=161.29, stdev=41.03 00:10:09.124 clat percentiles (usec): 00:10:09.124 | 1.00th=[ 100], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 118], 00:10:09.124 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:10:09.124 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 188], 95.00th=[ 227], 00:10:09.124 | 99.00th=[ 255], 99.50th=[ 265], 99.90th=[ 611], 99.95th=[ 635], 00:10:09.124 | 99.99th=[ 816] 00:10:09.124 bw ( KiB/s): min=12288, max=12288, per=32.52%, avg=12288.00, stdev= 0.00, samples=1 00:10:09.124 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:09.124 lat (usec) : 100=0.56%, 250=92.61%, 500=6.66%, 750=0.11%, 1000=0.02% 00:10:09.124 lat (msec) : 2=0.02%, 4=0.02% 00:10:09.124 cpu : usr=1.70%, sys=8.40%, ctx=5363, majf=0, minf=3 00:10:09.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.124 issued rwts: total=2560,2800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.124 job1: (groupid=0, jobs=1): err= 0: pid=66792: Tue Oct 15 08:21:10 2024 00:10:09.124 read: IOPS=1711, BW=6845KiB/s (7009kB/s)(6852KiB/1001msec) 00:10:09.124 slat (nsec): min=14010, max=56287, avg=21174.42, stdev=7475.73 00:10:09.124 clat (usec): min=149, max=915, avg=298.09, stdev=69.15 00:10:09.124 lat (usec): min=167, max=943, avg=319.26, stdev=70.46 00:10:09.124 clat percentiles (usec): 00:10:09.124 | 1.00th=[ 188], 5.00th=[ 239], 10.00th=[ 251], 20.00th=[ 262], 00:10:09.124 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:10:09.124 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 363], 95.00th=[ 490], 00:10:09.124 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 570], 99.95th=[ 914], 00:10:09.124 | 99.99th=[ 914] 00:10:09.124 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:09.124 slat (usec): min=17, max=124, avg=28.49, stdev=10.01 00:10:09.124 clat (usec): min=97, max=394, avg=188.38, stdev=36.70 00:10:09.124 lat (usec): min=118, max=518, avg=216.86, stdev=39.08 00:10:09.124 clat percentiles (usec): 00:10:09.124 | 1.00th=[ 104], 5.00th=[ 117], 10.00th=[ 127], 20.00th=[ 159], 00:10:09.124 | 30.00th=[ 180], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 204], 00:10:09.124 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 239], 00:10:09.124 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 297], 99.95th=[ 330], 00:10:09.124 | 99.99th=[ 396] 00:10:09.124 bw ( KiB/s): min= 8192, max= 8192, per=21.68%, avg=8192.00, stdev= 0.00, samples=1 00:10:09.124 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:09.124 lat (usec) : 100=0.13%, 250=57.46%, 500=40.49%, 750=1.89%, 1000=0.03% 00:10:09.124 cpu : usr=2.00%, sys=7.30%, ctx=3777, majf=0, minf=9 00:10:09.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.124 issued rwts: total=1713,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.124 
latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.124 job2: (groupid=0, jobs=1): err= 0: pid=66793: Tue Oct 15 08:21:10 2024 00:10:09.124 read: IOPS=1607, BW=6430KiB/s (6584kB/s)(6436KiB/1001msec) 00:10:09.124 slat (nsec): min=13386, max=57906, avg=17648.04, stdev=4744.48 00:10:09.124 clat (usec): min=182, max=940, avg=298.17, stdev=53.17 00:10:09.124 lat (usec): min=207, max=968, avg=315.82, stdev=54.71 00:10:09.124 clat percentiles (usec): 00:10:09.124 | 1.00th=[ 243], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 269], 00:10:09.124 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:10:09.124 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 359], 95.00th=[ 404], 00:10:09.124 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 898], 99.95th=[ 938], 00:10:09.124 | 99.99th=[ 938] 00:10:09.124 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:09.124 slat (usec): min=19, max=124, avg=27.79, stdev= 9.56 00:10:09.124 clat (usec): min=108, max=1672, avg=208.66, stdev=62.24 00:10:09.124 lat (usec): min=133, max=1693, avg=236.45, stdev=67.49 00:10:09.124 clat percentiles (usec): 00:10:09.124 | 1.00th=[ 124], 5.00th=[ 135], 10.00th=[ 143], 20.00th=[ 178], 00:10:09.124 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:10:09.124 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 285], 95.00th=[ 338], 00:10:09.124 | 99.00th=[ 375], 99.50th=[ 383], 99.90th=[ 396], 99.95th=[ 429], 00:10:09.124 | 99.99th=[ 1680] 00:10:09.124 bw ( KiB/s): min= 8192, max= 8192, per=21.68%, avg=8192.00, stdev= 0.00, samples=1 00:10:09.124 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:09.124 lat (usec) : 250=50.37%, 500=49.17%, 750=0.36%, 1000=0.08% 00:10:09.124 lat (msec) : 2=0.03% 00:10:09.124 cpu : usr=1.40%, sys=6.90%, ctx=3657, majf=0, minf=11 00:10:09.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.124 issued rwts: total=1609,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.124 job3: (groupid=0, jobs=1): err= 0: pid=66794: Tue Oct 15 08:21:10 2024 00:10:09.124 read: IOPS=2489, BW=9958KiB/s (10.2MB/s)(9968KiB/1001msec) 00:10:09.124 slat (nsec): min=9433, max=63230, avg=17096.67, stdev=4867.99 00:10:09.124 clat (usec): min=144, max=7334, avg=206.25, stdev=199.88 00:10:09.124 lat (usec): min=164, max=7355, avg=223.34, stdev=199.88 00:10:09.124 clat percentiles (usec): 00:10:09.124 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:10:09.124 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:10:09.124 | 70.00th=[ 198], 80.00th=[ 227], 90.00th=[ 255], 95.00th=[ 273], 00:10:09.124 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 2671], 99.95th=[ 5997], 00:10:09.124 | 99.99th=[ 7308] 00:10:09.124 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:09.124 slat (usec): min=14, max=104, avg=25.58, stdev= 7.91 00:10:09.124 clat (usec): min=84, max=936, avg=143.62, stdev=29.42 00:10:09.124 lat (usec): min=125, max=960, avg=169.20, stdev=28.72 00:10:09.124 clat percentiles (usec): 00:10:09.124 | 1.00th=[ 113], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 128], 00:10:09.124 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:10:09.124 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 167], 95.00th=[ 194], 
00:10:09.124 | 99.00th=[ 241], 99.50th=[ 251], 99.90th=[ 302], 99.95th=[ 371], 00:10:09.124 | 99.99th=[ 938] 00:10:09.124 bw ( KiB/s): min=12288, max=12288, per=32.52%, avg=12288.00, stdev= 0.00, samples=1 00:10:09.124 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:09.124 lat (usec) : 100=0.02%, 250=93.86%, 500=5.94%, 750=0.06%, 1000=0.04% 00:10:09.124 lat (msec) : 4=0.04%, 10=0.04% 00:10:09.124 cpu : usr=2.30%, sys=8.40%, ctx=5054, majf=0, minf=16 00:10:09.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.124 issued rwts: total=2492,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.124 00:10:09.124 Run status group 0 (all jobs): 00:10:09.124 READ: bw=32.7MiB/s (34.3MB/s), 6430KiB/s-9.99MiB/s (6584kB/s-10.5MB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:10:09.124 WRITE: bw=36.9MiB/s (38.7MB/s), 8184KiB/s-10.9MiB/s (8380kB/s-11.5MB/s), io=36.9MiB (38.7MB), run=1001-1001msec 00:10:09.124 00:10:09.124 Disk stats (read/write): 00:10:09.124 nvme0n1: ios=2267/2560, merge=0/0, ticks=442/357, in_queue=799, util=86.36% 00:10:09.124 nvme0n2: ios=1536/1645, merge=0/0, ticks=466/335, in_queue=801, util=87.14% 00:10:09.124 nvme0n3: ios=1476/1536, merge=0/0, ticks=447/364, in_queue=811, util=89.16% 00:10:09.124 nvme0n4: ios=2048/2376, merge=0/0, ticks=397/364, in_queue=761, util=88.98% 00:10:09.124 08:21:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:09.124 [global] 00:10:09.124 thread=1 00:10:09.124 invalidate=1 00:10:09.124 rw=randwrite 00:10:09.124 time_based=1 00:10:09.124 runtime=1 00:10:09.124 ioengine=libaio 00:10:09.124 direct=1 00:10:09.124 bs=4096 00:10:09.124 iodepth=1 00:10:09.124 norandommap=0 00:10:09.124 numjobs=1 00:10:09.124 00:10:09.124 verify_dump=1 00:10:09.124 verify_backlog=512 00:10:09.124 verify_state_save=0 00:10:09.124 do_verify=1 00:10:09.124 verify=crc32c-intel 00:10:09.124 [job0] 00:10:09.124 filename=/dev/nvme0n1 00:10:09.124 [job1] 00:10:09.124 filename=/dev/nvme0n2 00:10:09.124 [job2] 00:10:09.124 filename=/dev/nvme0n3 00:10:09.124 [job3] 00:10:09.124 filename=/dev/nvme0n4 00:10:09.124 Could not set queue depth (nvme0n1) 00:10:09.124 Could not set queue depth (nvme0n2) 00:10:09.124 Could not set queue depth (nvme0n3) 00:10:09.124 Could not set queue depth (nvme0n4) 00:10:09.124 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.124 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.124 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.124 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.124 fio-3.35 00:10:09.124 Starting 4 threads 00:10:10.515 00:10:10.515 job0: (groupid=0, jobs=1): err= 0: pid=66847: Tue Oct 15 08:21:11 2024 00:10:10.515 read: IOPS=1599, BW=6398KiB/s (6551kB/s)(6404KiB/1001msec) 00:10:10.515 slat (nsec): min=13481, max=48269, avg=16652.13, stdev=2856.64 00:10:10.515 clat (usec): min=160, max=739, avg=299.84, stdev=59.36 00:10:10.515 lat (usec): min=176, max=763, avg=316.49, 
stdev=60.26 00:10:10.515 clat percentiles (usec): 00:10:10.515 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:10:10.515 | 30.00th=[ 277], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:10:10.515 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 334], 95.00th=[ 453], 00:10:10.515 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 619], 99.95th=[ 742], 00:10:10.515 | 99.99th=[ 742] 00:10:10.515 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:10.515 slat (usec): min=17, max=119, avg=26.01, stdev= 8.28 00:10:10.515 clat (usec): min=95, max=1773, avg=211.14, stdev=58.41 00:10:10.515 lat (usec): min=122, max=1794, avg=237.15, stdev=61.29 00:10:10.515 clat percentiles (usec): 00:10:10.515 | 1.00th=[ 111], 5.00th=[ 127], 10.00th=[ 155], 20.00th=[ 190], 00:10:10.515 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:10.515 | 70.00th=[ 219], 80.00th=[ 235], 90.00th=[ 269], 95.00th=[ 289], 00:10:10.515 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 775], 99.95th=[ 807], 00:10:10.515 | 99.99th=[ 1778] 00:10:10.515 bw ( KiB/s): min= 8192, max= 8192, per=20.38%, avg=8192.00, stdev= 0.00, samples=1 00:10:10.515 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:10.515 lat (usec) : 100=0.03%, 250=48.51%, 500=49.63%, 750=1.75%, 1000=0.05% 00:10:10.515 lat (msec) : 2=0.03% 00:10:10.515 cpu : usr=2.10%, sys=5.70%, ctx=3651, majf=0, minf=11 00:10:10.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.515 issued rwts: total=1601,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.515 job1: (groupid=0, jobs=1): err= 0: pid=66848: Tue Oct 15 08:21:11 2024 00:10:10.515 read: IOPS=2961, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:10:10.515 slat (nsec): min=12151, max=53694, avg=15413.62, stdev=4134.96 00:10:10.515 clat (usec): min=143, max=606, avg=168.32, stdev=16.91 00:10:10.515 lat (usec): min=156, max=625, avg=183.73, stdev=18.36 00:10:10.515 clat percentiles (usec): 00:10:10.515 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:10.515 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:10:10.515 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:10:10.515 | 99.00th=[ 202], 99.50th=[ 217], 99.90th=[ 437], 99.95th=[ 441], 00:10:10.515 | 99.99th=[ 611] 00:10:10.515 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:10.515 slat (usec): min=14, max=123, avg=21.48, stdev= 4.95 00:10:10.515 clat (usec): min=91, max=251, avg=123.29, stdev=12.12 00:10:10.515 lat (usec): min=110, max=374, avg=144.77, stdev=13.79 00:10:10.515 clat percentiles (usec): 00:10:10.515 | 1.00th=[ 99], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 114], 00:10:10.515 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 126], 00:10:10.515 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:10:10.515 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 239], 00:10:10.515 | 99.99th=[ 251] 00:10:10.515 bw ( KiB/s): min=12288, max=12288, per=30.56%, avg=12288.00, stdev= 0.00, samples=1 00:10:10.515 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:10.515 lat (usec) : 100=0.96%, 250=98.91%, 500=0.12%, 750=0.02% 00:10:10.515 cpu : usr=2.60%, sys=8.30%, ctx=6036, majf=0, 
minf=7 00:10:10.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.515 issued rwts: total=2964,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.515 job2: (groupid=0, jobs=1): err= 0: pid=66849: Tue Oct 15 08:21:11 2024 00:10:10.515 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:10.515 slat (nsec): min=12433, max=61087, avg=16293.43, stdev=4209.03 00:10:10.515 clat (usec): min=154, max=2612, avg=189.02, stdev=66.15 00:10:10.515 lat (usec): min=169, max=2629, avg=205.31, stdev=66.40 00:10:10.515 clat percentiles (usec): 00:10:10.515 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:10:10.515 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:10:10.515 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 210], 00:10:10.515 | 99.00th=[ 225], 99.50th=[ 229], 99.90th=[ 775], 99.95th=[ 2278], 00:10:10.515 | 99.99th=[ 2606] 00:10:10.515 write: IOPS=2890, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:10:10.515 slat (nsec): min=15185, max=83704, avg=23407.14, stdev=5632.66 00:10:10.515 clat (usec): min=108, max=423, avg=137.13, stdev=12.69 00:10:10.515 lat (usec): min=129, max=443, avg=160.54, stdev=14.11 00:10:10.515 clat percentiles (usec): 00:10:10.515 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:10:10.515 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:10:10.515 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 157], 00:10:10.515 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 204], 99.95th=[ 255], 00:10:10.515 | 99.99th=[ 424] 00:10:10.515 bw ( KiB/s): min=12288, max=12288, per=30.56%, avg=12288.00, stdev= 0.00, samples=1 00:10:10.515 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:10.515 lat (usec) : 250=99.85%, 500=0.09%, 1000=0.02% 00:10:10.515 lat (msec) : 4=0.04% 00:10:10.515 cpu : usr=2.70%, sys=7.90%, ctx=5453, majf=0, minf=10 00:10:10.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.515 issued rwts: total=2560,2893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.515 job3: (groupid=0, jobs=1): err= 0: pid=66850: Tue Oct 15 08:21:11 2024 00:10:10.515 read: IOPS=1579, BW=6318KiB/s (6469kB/s)(6324KiB/1001msec) 00:10:10.515 slat (nsec): min=13276, max=82067, avg=17083.23, stdev=3363.44 00:10:10.515 clat (usec): min=203, max=683, avg=288.97, stdev=35.31 00:10:10.516 lat (usec): min=221, max=713, avg=306.06, stdev=35.89 00:10:10.516 clat percentiles (usec): 00:10:10.516 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:10:10.516 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:10.516 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 359], 00:10:10.516 | 99.00th=[ 474], 99.50th=[ 494], 99.90th=[ 529], 99.95th=[ 685], 00:10:10.516 | 99.99th=[ 685] 00:10:10.516 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:10.516 slat (usec): min=19, max=124, avg=26.66, stdev= 9.42 00:10:10.516 clat (usec): min=110, max=2634, avg=221.77, 
stdev=91.10 00:10:10.516 lat (usec): min=141, max=2699, avg=248.42, stdev=95.36 00:10:10.516 clat percentiles (usec): 00:10:10.516 | 1.00th=[ 129], 5.00th=[ 143], 10.00th=[ 167], 20.00th=[ 192], 00:10:10.516 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:10:10.516 | 70.00th=[ 221], 80.00th=[ 243], 90.00th=[ 289], 95.00th=[ 314], 00:10:10.516 | 99.00th=[ 429], 99.50th=[ 478], 99.90th=[ 1205], 99.95th=[ 2311], 00:10:10.516 | 99.99th=[ 2638] 00:10:10.516 bw ( KiB/s): min= 8192, max= 8192, per=20.38%, avg=8192.00, stdev= 0.00, samples=1 00:10:10.516 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:10.516 lat (usec) : 250=46.24%, 500=53.46%, 750=0.19%, 1000=0.03% 00:10:10.516 lat (msec) : 2=0.03%, 4=0.06% 00:10:10.516 cpu : usr=2.40%, sys=5.40%, ctx=3632, majf=0, minf=21 00:10:10.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.516 issued rwts: total=1581,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.516 00:10:10.516 Run status group 0 (all jobs): 00:10:10.516 READ: bw=34.0MiB/s (35.6MB/s), 6318KiB/s-11.6MiB/s (6469kB/s-12.1MB/s), io=34.0MiB (35.7MB), run=1001-1001msec 00:10:10.516 WRITE: bw=39.3MiB/s (41.2MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.3MiB (41.2MB), run=1001-1001msec 00:10:10.516 00:10:10.516 Disk stats (read/write): 00:10:10.516 nvme0n1: ios=1568/1536, merge=0/0, ticks=484/347, in_queue=831, util=87.27% 00:10:10.516 nvme0n2: ios=2599/2679, merge=0/0, ticks=470/349, in_queue=819, util=88.35% 00:10:10.516 nvme0n3: ios=2110/2560, merge=0/0, ticks=406/376, in_queue=782, util=89.18% 00:10:10.516 nvme0n4: ios=1489/1536, merge=0/0, ticks=436/367, in_queue=803, util=89.74% 00:10:10.516 08:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:10.516 [global] 00:10:10.516 thread=1 00:10:10.516 invalidate=1 00:10:10.516 rw=write 00:10:10.516 time_based=1 00:10:10.516 runtime=1 00:10:10.516 ioengine=libaio 00:10:10.516 direct=1 00:10:10.516 bs=4096 00:10:10.516 iodepth=128 00:10:10.516 norandommap=0 00:10:10.516 numjobs=1 00:10:10.516 00:10:10.516 verify_dump=1 00:10:10.516 verify_backlog=512 00:10:10.516 verify_state_save=0 00:10:10.516 do_verify=1 00:10:10.516 verify=crc32c-intel 00:10:10.516 [job0] 00:10:10.516 filename=/dev/nvme0n1 00:10:10.516 [job1] 00:10:10.516 filename=/dev/nvme0n2 00:10:10.516 [job2] 00:10:10.516 filename=/dev/nvme0n3 00:10:10.516 [job3] 00:10:10.516 filename=/dev/nvme0n4 00:10:10.516 Could not set queue depth (nvme0n1) 00:10:10.516 Could not set queue depth (nvme0n2) 00:10:10.516 Could not set queue depth (nvme0n3) 00:10:10.516 Could not set queue depth (nvme0n4) 00:10:10.516 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.516 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.516 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.516 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.516 fio-3.35 00:10:10.516 Starting 4 threads 00:10:11.892 00:10:11.892 job0: 
(groupid=0, jobs=1): err= 0: pid=66911: Tue Oct 15 08:21:13 2024 00:10:11.892 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:10:11.892 slat (usec): min=6, max=6046, avg=186.17, stdev=936.61 00:10:11.892 clat (usec): min=17173, max=26311, avg=24619.02, stdev=1149.64 00:10:11.892 lat (usec): min=22401, max=26325, avg=24805.19, stdev=674.46 00:10:11.892 clat percentiles (usec): 00:10:11.892 | 1.00th=[19268], 5.00th=[22676], 10.00th=[23987], 20.00th=[24249], 00:10:11.892 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[25035], 00:10:11.892 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25560], 95.00th=[25822], 00:10:11.892 | 99.00th=[26084], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:10:11.892 | 99.99th=[26346] 00:10:11.892 write: IOPS=2683, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1002msec); 0 zone resets 00:10:11.892 slat (usec): min=11, max=8288, avg=184.58, stdev=858.14 00:10:11.892 clat (usec): min=417, max=28172, avg=23276.25, stdev=2615.14 00:10:11.892 lat (usec): min=5627, max=28208, avg=23460.83, stdev=2472.00 00:10:11.892 clat percentiles (usec): 00:10:11.892 | 1.00th=[ 6456], 5.00th=[19006], 10.00th=[22938], 20.00th=[23200], 00:10:11.892 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:10:11.892 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:10:11.892 | 99.00th=[27919], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:10:11.892 | 99.99th=[28181] 00:10:11.892 bw ( KiB/s): min=11807, max=11807, per=18.80%, avg=11807.00, stdev= 0.00, samples=1 00:10:11.892 iops : min= 2951, max= 2951, avg=2951.00, stdev= 0.00, samples=1 00:10:11.892 lat (usec) : 500=0.02% 00:10:11.892 lat (msec) : 10=0.61%, 20=4.00%, 50=95.37% 00:10:11.892 cpu : usr=4.30%, sys=8.19%, ctx=165, majf=0, minf=19 00:10:11.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:11.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.892 issued rwts: total=2560,2689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.892 job1: (groupid=0, jobs=1): err= 0: pid=66912: Tue Oct 15 08:21:13 2024 00:10:11.892 read: IOPS=5530, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1003msec) 00:10:11.892 slat (usec): min=8, max=4324, avg=88.01, stdev=378.44 00:10:11.892 clat (usec): min=645, max=16213, avg=11715.90, stdev=1183.90 00:10:11.892 lat (usec): min=2187, max=16378, avg=11803.91, stdev=1189.67 00:10:11.892 clat percentiles (usec): 00:10:11.892 | 1.00th=[ 6259], 5.00th=[10159], 10.00th=[10683], 20.00th=[11338], 00:10:11.892 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:10:11.892 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12518], 95.00th=[12780], 00:10:11.892 | 99.00th=[14484], 99.50th=[15008], 99.90th=[15139], 99.95th=[15401], 00:10:11.892 | 99.99th=[16188] 00:10:11.892 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:11.892 slat (usec): min=12, max=4633, avg=82.73, stdev=443.95 00:10:11.892 clat (usec): min=6304, max=16184, avg=10981.36, stdev=936.73 00:10:11.892 lat (usec): min=6344, max=16231, avg=11064.08, stdev=1024.35 00:10:11.892 clat percentiles (usec): 00:10:11.892 | 1.00th=[ 8160], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10552], 00:10:11.892 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:10:11.892 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 
95.00th=[12256], 00:10:11.892 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15533], 99.95th=[15795], 00:10:11.892 | 99.99th=[16188] 00:10:11.892 bw ( KiB/s): min=21309, max=23704, per=35.84%, avg=22506.50, stdev=1693.52, samples=2 00:10:11.892 iops : min= 5327, max= 5926, avg=5626.50, stdev=423.56, samples=2 00:10:11.892 lat (usec) : 750=0.01% 00:10:11.892 lat (msec) : 4=0.20%, 10=5.96%, 20=93.84% 00:10:11.892 cpu : usr=4.69%, sys=16.67%, ctx=344, majf=0, minf=6 00:10:11.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:11.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.892 issued rwts: total=5547,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.892 job2: (groupid=0, jobs=1): err= 0: pid=66913: Tue Oct 15 08:21:13 2024 00:10:11.892 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:10:11.892 slat (usec): min=9, max=8648, avg=191.88, stdev=966.97 00:10:11.892 clat (usec): min=16857, max=28169, avg=24572.47, stdev=1457.64 00:10:11.892 lat (usec): min=21865, max=28188, avg=24764.35, stdev=1138.01 00:10:11.892 clat percentiles (usec): 00:10:11.892 | 1.00th=[19268], 5.00th=[22152], 10.00th=[22414], 20.00th=[24249], 00:10:11.892 | 30.00th=[24511], 40.00th=[24773], 50.00th=[24773], 60.00th=[25035], 00:10:11.892 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25822], 95.00th=[26870], 00:10:11.892 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:10:11.892 | 99.99th=[28181] 00:10:11.892 write: IOPS=2683, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1002msec); 0 zone resets 00:10:11.892 slat (usec): min=16, max=6103, avg=179.39, stdev=830.25 00:10:11.892 clat (usec): min=159, max=28678, avg=23562.53, stdev=2834.99 00:10:11.892 lat (usec): min=6171, max=28723, avg=23741.92, stdev=2697.93 00:10:11.892 clat percentiles (usec): 00:10:11.892 | 1.00th=[ 6849], 5.00th=[19006], 10.00th=[21890], 20.00th=[23200], 00:10:11.892 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:10:11.892 | 70.00th=[24249], 80.00th=[24511], 90.00th=[26346], 95.00th=[26870], 00:10:11.892 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:10:11.892 | 99.99th=[28705] 00:10:11.892 bw ( KiB/s): min= 8431, max=12040, per=16.30%, avg=10235.50, stdev=2551.95, samples=2 00:10:11.892 iops : min= 2107, max= 3010, avg=2558.50, stdev=638.52, samples=2 00:10:11.892 lat (usec) : 250=0.02% 00:10:11.892 lat (msec) : 10=0.61%, 20=3.83%, 50=95.54% 00:10:11.892 cpu : usr=3.30%, sys=9.49%, ctx=166, majf=0, minf=17 00:10:11.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:11.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.892 issued rwts: total=2560,2689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.892 job3: (groupid=0, jobs=1): err= 0: pid=66914: Tue Oct 15 08:21:13 2024 00:10:11.892 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:10:11.892 slat (usec): min=5, max=3409, avg=103.91, stdev=455.45 00:10:11.892 clat (usec): min=10152, max=17064, avg=13921.61, stdev=1102.32 00:10:11.892 lat (usec): min=10458, max=18658, avg=14025.53, stdev=1018.70 00:10:11.892 clat percentiles (usec): 00:10:11.892 | 1.00th=[10945], 5.00th=[12649], 
10.00th=[12911], 20.00th=[13173], 00:10:11.892 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:10:11.892 | 70.00th=[14091], 80.00th=[14353], 90.00th=[15795], 95.00th=[16188], 00:10:11.892 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[17171], 00:10:11.892 | 99.99th=[17171] 00:10:11.892 write: IOPS=4733, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1001msec); 0 zone resets 00:10:11.892 slat (usec): min=14, max=4137, avg=101.30, stdev=426.29 00:10:11.892 clat (usec): min=322, max=20402, avg=13128.23, stdev=1752.17 00:10:11.892 lat (usec): min=3541, max=20450, avg=13229.53, stdev=1719.06 00:10:11.892 clat percentiles (usec): 00:10:11.892 | 1.00th=[ 7570], 5.00th=[11600], 10.00th=[12125], 20.00th=[12387], 00:10:11.892 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042], 00:10:11.892 | 70.00th=[13173], 80.00th=[13698], 90.00th=[15008], 95.00th=[15926], 00:10:11.892 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:10:11.892 | 99.99th=[20317] 00:10:11.892 bw ( KiB/s): min=20521, max=20521, per=32.67%, avg=20521.00, stdev= 0.00, samples=1 00:10:11.892 iops : min= 5130, max= 5130, avg=5130.00, stdev= 0.00, samples=1 00:10:11.892 lat (usec) : 500=0.01% 00:10:11.892 lat (msec) : 4=0.20%, 10=0.65%, 20=98.80%, 50=0.33% 00:10:11.892 cpu : usr=4.60%, sys=14.20%, ctx=427, majf=0, minf=5 00:10:11.892 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:11.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.892 issued rwts: total=4608,4738,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.892 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.892 00:10:11.892 Run status group 0 (all jobs): 00:10:11.892 READ: bw=59.5MiB/s (62.4MB/s), 9.98MiB/s-21.6MiB/s (10.5MB/s-22.7MB/s), io=59.7MiB (62.6MB), run=1001-1003msec 00:10:11.892 WRITE: bw=61.3MiB/s (64.3MB/s), 10.5MiB/s-21.9MiB/s (11.0MB/s-23.0MB/s), io=61.5MiB (64.5MB), run=1001-1003msec 00:10:11.892 00:10:11.892 Disk stats (read/write): 00:10:11.893 nvme0n1: ios=2098/2496, merge=0/0, ticks=11702/13457, in_queue=25159, util=89.47% 00:10:11.893 nvme0n2: ios=4657/5109, merge=0/0, ticks=26131/22683, in_queue=48814, util=89.70% 00:10:11.893 nvme0n3: ios=2078/2496, merge=0/0, ticks=12097/13001, in_queue=25098, util=90.05% 00:10:11.893 nvme0n4: ios=4132/4225, merge=0/0, ticks=12723/11523, in_queue=24246, util=90.73% 00:10:11.893 08:21:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:11.893 [global] 00:10:11.893 thread=1 00:10:11.893 invalidate=1 00:10:11.893 rw=randwrite 00:10:11.893 time_based=1 00:10:11.893 runtime=1 00:10:11.893 ioengine=libaio 00:10:11.893 direct=1 00:10:11.893 bs=4096 00:10:11.893 iodepth=128 00:10:11.893 norandommap=0 00:10:11.893 numjobs=1 00:10:11.893 00:10:11.893 verify_dump=1 00:10:11.893 verify_backlog=512 00:10:11.893 verify_state_save=0 00:10:11.893 do_verify=1 00:10:11.893 verify=crc32c-intel 00:10:11.893 [job0] 00:10:11.893 filename=/dev/nvme0n1 00:10:11.893 [job1] 00:10:11.893 filename=/dev/nvme0n2 00:10:11.893 [job2] 00:10:11.893 filename=/dev/nvme0n3 00:10:11.893 [job3] 00:10:11.893 filename=/dev/nvme0n4 00:10:11.893 Could not set queue depth (nvme0n1) 00:10:11.893 Could not set queue depth (nvme0n2) 00:10:11.893 Could not set queue depth (nvme0n3) 00:10:11.893 Could not set queue depth 
(nvme0n4) 00:10:11.893 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.893 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.893 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.893 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.893 fio-3.35 00:10:11.893 Starting 4 threads 00:10:13.272 00:10:13.272 job0: (groupid=0, jobs=1): err= 0: pid=66971: Tue Oct 15 08:21:14 2024 00:10:13.272 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:10:13.272 slat (usec): min=8, max=8916, avg=153.60, stdev=703.15 00:10:13.272 clat (usec): min=9666, max=40187, avg=19214.87, stdev=4272.54 00:10:13.273 lat (usec): min=9690, max=40211, avg=19368.47, stdev=4308.93 00:10:13.273 clat percentiles (usec): 00:10:13.273 | 1.00th=[11469], 5.00th=[13566], 10.00th=[14222], 20.00th=[16319], 00:10:13.273 | 30.00th=[17695], 40.00th=[18220], 50.00th=[19006], 60.00th=[19268], 00:10:13.273 | 70.00th=[19792], 80.00th=[21365], 90.00th=[23462], 95.00th=[26608], 00:10:13.273 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[37487], 00:10:13.273 | 99.99th=[40109] 00:10:13.273 write: IOPS=3266, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1005msec); 0 zone resets 00:10:13.273 slat (usec): min=15, max=10897, avg=152.60, stdev=713.68 00:10:13.273 clat (usec): min=538, max=54690, avg=20583.42, stdev=10555.69 00:10:13.273 lat (usec): min=5715, max=54715, avg=20736.03, stdev=10633.94 00:10:13.273 clat percentiles (usec): 00:10:13.273 | 1.00th=[ 6325], 5.00th=[11076], 10.00th=[11731], 20.00th=[14091], 00:10:13.273 | 30.00th=[14746], 40.00th=[15401], 50.00th=[17171], 60.00th=[19006], 00:10:13.273 | 70.00th=[20055], 80.00th=[23200], 90.00th=[39060], 95.00th=[46400], 00:10:13.273 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:10:13.273 | 99.99th=[54789] 00:10:13.273 bw ( KiB/s): min=10080, max=15190, per=24.73%, avg=12635.00, stdev=3613.32, samples=2 00:10:13.273 iops : min= 2520, max= 3797, avg=3158.50, stdev=902.98, samples=2 00:10:13.273 lat (usec) : 750=0.02% 00:10:13.273 lat (msec) : 10=1.56%, 20=70.59%, 50=26.51%, 100=1.32% 00:10:13.273 cpu : usr=3.88%, sys=9.96%, ctx=328, majf=0, minf=9 00:10:13.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:13.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.274 issued rwts: total=3072,3283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.274 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.274 job1: (groupid=0, jobs=1): err= 0: pid=66972: Tue Oct 15 08:21:14 2024 00:10:13.274 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:13.274 slat (usec): min=8, max=11451, avg=187.31, stdev=852.42 00:10:13.274 clat (usec): min=11045, max=68726, avg=23691.12, stdev=13820.45 00:10:13.274 lat (usec): min=11067, max=68785, avg=23878.43, stdev=13942.84 00:10:13.274 clat percentiles (usec): 00:10:13.274 | 1.00th=[11994], 5.00th=[13173], 10.00th=[13304], 20.00th=[13698], 00:10:13.274 | 30.00th=[15401], 40.00th=[18482], 50.00th=[19006], 60.00th=[19268], 00:10:13.274 | 70.00th=[19792], 80.00th=[30016], 90.00th=[49546], 95.00th=[55837], 00:10:13.274 | 99.00th=[62653], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 
00:10:13.274 | 99.99th=[68682] 00:10:13.274 write: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1004msec); 0 zone resets 00:10:13.274 slat (usec): min=8, max=10642, avg=171.47, stdev=792.74 00:10:13.274 clat (usec): min=106, max=67399, avg=22214.11, stdev=14575.40 00:10:13.275 lat (usec): min=3780, max=67435, avg=22385.58, stdev=14666.93 00:10:13.275 clat percentiles (usec): 00:10:13.275 | 1.00th=[ 4359], 5.00th=[10159], 10.00th=[10683], 20.00th=[11076], 00:10:13.275 | 30.00th=[11338], 40.00th=[13042], 50.00th=[19006], 60.00th=[20055], 00:10:13.275 | 70.00th=[22938], 80.00th=[32900], 90.00th=[47973], 95.00th=[55837], 00:10:13.275 | 99.00th=[64226], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:10:13.275 | 99.99th=[67634] 00:10:13.275 bw ( KiB/s): min= 7264, max=14896, per=21.69%, avg=11080.00, stdev=5396.64, samples=2 00:10:13.275 iops : min= 1816, max= 3724, avg=2770.00, stdev=1349.16, samples=2 00:10:13.275 lat (usec) : 250=0.02% 00:10:13.275 lat (msec) : 4=0.20%, 10=2.36%, 20=62.48%, 50=26.51%, 100=8.43% 00:10:13.276 cpu : usr=2.49%, sys=9.87%, ctx=380, majf=0, minf=20 00:10:13.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:13.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.276 issued rwts: total=2560,2898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.276 job2: (groupid=0, jobs=1): err= 0: pid=66973: Tue Oct 15 08:21:14 2024 00:10:13.276 read: IOPS=2090, BW=8362KiB/s (8563kB/s)(8404KiB/1005msec) 00:10:13.276 slat (usec): min=4, max=11007, avg=244.64, stdev=995.75 00:10:13.276 clat (usec): min=2386, max=77561, avg=31470.52, stdev=13482.95 00:10:13.276 lat (usec): min=8996, max=78708, avg=31715.16, stdev=13556.69 00:10:13.276 clat percentiles (usec): 00:10:13.276 | 1.00th=[15795], 5.00th=[17695], 10.00th=[19530], 20.00th=[22152], 00:10:13.277 | 30.00th=[23462], 40.00th=[25035], 50.00th=[26870], 60.00th=[30278], 00:10:13.277 | 70.00th=[32113], 80.00th=[40633], 90.00th=[49021], 95.00th=[65274], 00:10:13.277 | 99.00th=[73925], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:10:13.277 | 99.99th=[77071] 00:10:13.277 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:10:13.277 slat (usec): min=5, max=11366, avg=181.96, stdev=864.52 00:10:13.277 clat (usec): min=9033, max=46681, avg=23538.44, stdev=7393.40 00:10:13.277 lat (usec): min=9071, max=47363, avg=23720.40, stdev=7441.10 00:10:13.277 clat percentiles (usec): 00:10:13.277 | 1.00th=[10290], 5.00th=[13566], 10.00th=[15926], 20.00th=[16909], 00:10:13.277 | 30.00th=[19792], 40.00th=[20841], 50.00th=[21890], 60.00th=[22938], 00:10:13.277 | 70.00th=[27132], 80.00th=[29230], 90.00th=[33817], 95.00th=[38536], 00:10:13.277 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:13.277 | 99.99th=[46924] 00:10:13.277 bw ( KiB/s): min= 8192, max=11688, per=19.45%, avg=9940.00, stdev=2472.05, samples=2 00:10:13.277 iops : min= 2048, max= 2922, avg=2485.00, stdev=618.01, samples=2 00:10:13.277 lat (msec) : 4=0.02%, 10=0.49%, 20=22.46%, 50=72.52%, 100=4.51% 00:10:13.277 cpu : usr=2.39%, sys=8.37%, ctx=564, majf=0, minf=9 00:10:13.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:10:13.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:10:13.277 issued rwts: total=2101,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.278 job3: (groupid=0, jobs=1): err= 0: pid=66974: Tue Oct 15 08:21:14 2024 00:10:13.278 read: IOPS=4020, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1004msec) 00:10:13.278 slat (usec): min=8, max=10022, avg=136.37, stdev=678.12 00:10:13.278 clat (usec): min=2102, max=39657, avg=17806.02, stdev=7485.23 00:10:13.278 lat (usec): min=7193, max=39672, avg=17942.39, stdev=7544.73 00:10:13.278 clat percentiles (usec): 00:10:13.278 | 1.00th=[ 7701], 5.00th=[10552], 10.00th=[11076], 20.00th=[11207], 00:10:13.278 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12780], 60.00th=[20579], 00:10:13.278 | 70.00th=[22938], 80.00th=[24249], 90.00th=[29230], 95.00th=[31065], 00:10:13.278 | 99.00th=[34866], 99.50th=[36439], 99.90th=[39584], 99.95th=[39584], 00:10:13.278 | 99.99th=[39584] 00:10:13.278 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:13.278 slat (usec): min=5, max=8328, avg=101.73, stdev=516.14 00:10:13.278 clat (usec): min=6031, max=31929, avg=13487.25, stdev=4495.59 00:10:13.278 lat (usec): min=7897, max=32574, avg=13588.98, stdev=4508.18 00:10:13.278 clat percentiles (usec): 00:10:13.278 | 1.00th=[ 7963], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10552], 00:10:13.278 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[12518], 00:10:13.278 | 70.00th=[15139], 80.00th=[16712], 90.00th=[19268], 95.00th=[23200], 00:10:13.278 | 99.00th=[28443], 99.50th=[28967], 99.90th=[31851], 99.95th=[31851], 00:10:13.278 | 99.99th=[31851] 00:10:13.278 bw ( KiB/s): min=12288, max=20521, per=32.11%, avg=16404.50, stdev=5821.61, samples=2 00:10:13.278 iops : min= 3072, max= 5130, avg=4101.00, stdev=1455.23, samples=2 00:10:13.278 lat (msec) : 4=0.01%, 10=7.44%, 20=67.15%, 50=25.40% 00:10:13.278 cpu : usr=2.79%, sys=12.86%, ctx=563, majf=0, minf=12 00:10:13.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:13.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.278 issued rwts: total=4037,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.279 00:10:13.279 Run status group 0 (all jobs): 00:10:13.279 READ: bw=45.7MiB/s (48.0MB/s), 8362KiB/s-15.7MiB/s (8563kB/s-16.5MB/s), io=46.0MiB (48.2MB), run=1004-1005msec 00:10:13.279 WRITE: bw=49.9MiB/s (52.3MB/s), 9.95MiB/s-15.9MiB/s (10.4MB/s-16.7MB/s), io=50.1MiB (52.6MB), run=1004-1005msec 00:10:13.279 00:10:13.279 Disk stats (read/write): 00:10:13.279 nvme0n1: ios=2610/2919, merge=0/0, ticks=24570/25456, in_queue=50026, util=88.08% 00:10:13.279 nvme0n2: ios=2095/2302, merge=0/0, ticks=16790/16952, in_queue=33742, util=87.77% 00:10:13.279 nvme0n3: ios=1722/2048, merge=0/0, ticks=23170/20240, in_queue=43410, util=88.25% 00:10:13.279 nvme0n4: ios=3584/3799, merge=0/0, ticks=40337/34897, in_queue=75234, util=89.51% 00:10:13.279 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:13.279 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:13.279 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66988 00:10:13.279 08:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # 
sleep 3 00:10:13.279 [global] 00:10:13.279 thread=1 00:10:13.279 invalidate=1 00:10:13.279 rw=read 00:10:13.279 time_based=1 00:10:13.279 runtime=10 00:10:13.279 ioengine=libaio 00:10:13.279 direct=1 00:10:13.279 bs=4096 00:10:13.279 iodepth=1 00:10:13.279 norandommap=1 00:10:13.279 numjobs=1 00:10:13.280 00:10:13.280 [job0] 00:10:13.280 filename=/dev/nvme0n1 00:10:13.280 [job1] 00:10:13.280 filename=/dev/nvme0n2 00:10:13.280 [job2] 00:10:13.280 filename=/dev/nvme0n3 00:10:13.280 [job3] 00:10:13.280 filename=/dev/nvme0n4 00:10:13.280 Could not set queue depth (nvme0n1) 00:10:13.280 Could not set queue depth (nvme0n2) 00:10:13.280 Could not set queue depth (nvme0n3) 00:10:13.280 Could not set queue depth (nvme0n4) 00:10:13.280 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.280 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.280 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.280 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.280 fio-3.35 00:10:13.280 Starting 4 threads 00:10:16.565 08:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:16.565 fio: pid=67031, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.565 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40218624, buflen=4096 00:10:16.565 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:16.565 fio: pid=67030, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.565 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=68026368, buflen=4096 00:10:16.823 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.823 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:16.823 fio: pid=67028, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.823 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9052160, buflen=4096 00:10:17.082 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.082 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:17.340 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56217600, buflen=4096 00:10:17.340 fio: pid=67029, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:17.340 00:10:17.340 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67028: Tue Oct 15 08:21:18 2024 00:10:17.340 read: IOPS=5388, BW=21.0MiB/s (22.1MB/s)(72.6MiB/3451msec) 00:10:17.340 slat (usec): min=9, max=15844, avg=16.81, stdev=199.24 00:10:17.340 clat (usec): min=137, max=2219, avg=167.52, stdev=39.71 00:10:17.340 lat (usec): min=150, max=16097, avg=184.34, stdev=203.65 00:10:17.340 clat percentiles (usec): 00:10:17.340 | 1.00th=[ 143], 
5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:17.341 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:10:17.341 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 192], 00:10:17.341 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 392], 99.95th=[ 1029], 00:10:17.341 | 99.99th=[ 2008] 00:10:17.341 bw ( KiB/s): min=21240, max=22424, per=35.12%, avg=22042.17, stdev=574.61, samples=6 00:10:17.341 iops : min= 5310, max= 5606, avg=5510.50, stdev=143.71, samples=6 00:10:17.341 lat (usec) : 250=98.83%, 500=1.09%, 750=0.01%, 1000=0.01% 00:10:17.341 lat (msec) : 2=0.05%, 4=0.01% 00:10:17.341 cpu : usr=1.51%, sys=6.67%, ctx=18600, majf=0, minf=1 00:10:17.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.341 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.341 issued rwts: total=18595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.341 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67029: Tue Oct 15 08:21:18 2024 00:10:17.341 read: IOPS=3666, BW=14.3MiB/s (15.0MB/s)(53.6MiB/3744msec) 00:10:17.341 slat (usec): min=10, max=15725, avg=19.86, stdev=209.32 00:10:17.341 clat (usec): min=3, max=7807, avg=251.45, stdev=191.38 00:10:17.341 lat (usec): min=141, max=15984, avg=271.31, stdev=283.84 00:10:17.341 clat percentiles (usec): 00:10:17.341 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155], 00:10:17.341 | 30.00th=[ 174], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:10:17.341 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:10:17.341 | 99.00th=[ 424], 99.50th=[ 461], 99.90th=[ 3359], 99.95th=[ 5735], 00:10:17.341 | 99.99th=[ 7570] 00:10:17.341 bw ( KiB/s): min=12440, max=21852, per=22.48%, avg=14107.71, stdev=3434.03, samples=7 00:10:17.341 iops : min= 3110, max= 5463, avg=3526.86, stdev=858.54, samples=7 00:10:17.341 lat (usec) : 4=0.01%, 250=32.88%, 500=66.73%, 750=0.15%, 1000=0.06% 00:10:17.341 lat (msec) : 2=0.02%, 4=0.09%, 10=0.06% 00:10:17.341 cpu : usr=0.94%, sys=4.97%, ctx=13740, majf=0, minf=2 00:10:17.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.341 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.341 issued rwts: total=13726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.341 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67030: Tue Oct 15 08:21:18 2024 00:10:17.341 read: IOPS=5208, BW=20.3MiB/s (21.3MB/s)(64.9MiB/3189msec) 00:10:17.341 slat (usec): min=9, max=15718, avg=15.70, stdev=167.23 00:10:17.341 clat (usec): min=52, max=1854, avg=175.08, stdev=32.60 00:10:17.341 lat (usec): min=154, max=15980, avg=190.78, stdev=171.25 00:10:17.341 clat percentiles (usec): 00:10:17.341 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:10:17.341 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:10:17.341 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 206], 00:10:17.341 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 424], 99.95th=[ 832], 00:10:17.341 | 99.99th=[ 1745] 00:10:17.341 bw ( KiB/s): min=20376, max=21656, per=33.86%, 
avg=21253.00, stdev=538.71, samples=6 00:10:17.341 iops : min= 5094, max= 5414, avg=5313.17, stdev=134.77, samples=6 00:10:17.341 lat (usec) : 100=0.01%, 250=98.22%, 500=1.68%, 750=0.03%, 1000=0.03% 00:10:17.341 lat (msec) : 2=0.03% 00:10:17.341 cpu : usr=1.60%, sys=6.15%, ctx=16619, majf=0, minf=1 00:10:17.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.341 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.341 issued rwts: total=16609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.341 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=67031: Tue Oct 15 08:21:18 2024 00:10:17.341 read: IOPS=3356, BW=13.1MiB/s (13.7MB/s)(38.4MiB/2926msec) 00:10:17.341 slat (usec): min=12, max=109, avg=15.71, stdev= 3.77 00:10:17.341 clat (usec): min=150, max=2927, avg=280.63, stdev=53.73 00:10:17.341 lat (usec): min=163, max=2967, avg=296.34, stdev=54.05 00:10:17.341 clat percentiles (usec): 00:10:17.341 | 1.00th=[ 165], 5.00th=[ 229], 10.00th=[ 258], 20.00th=[ 265], 00:10:17.341 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:10:17.341 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:10:17.341 | 99.00th=[ 355], 99.50th=[ 416], 99.90th=[ 562], 99.95th=[ 1352], 00:10:17.341 | 99.99th=[ 2933] 00:10:17.341 bw ( KiB/s): min=13245, max=13760, per=21.42%, avg=13442.60, stdev=238.05, samples=5 00:10:17.341 iops : min= 3311, max= 3440, avg=3360.60, stdev=59.56, samples=5 00:10:17.341 lat (usec) : 250=6.47%, 500=93.36%, 750=0.10%, 1000=0.01% 00:10:17.341 lat (msec) : 2=0.03%, 4=0.02% 00:10:17.341 cpu : usr=0.99%, sys=4.75%, ctx=9833, majf=0, minf=1 00:10:17.341 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.341 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.341 issued rwts: total=9820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.341 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.341 00:10:17.341 Run status group 0 (all jobs): 00:10:17.341 READ: bw=61.3MiB/s (64.3MB/s), 13.1MiB/s-21.0MiB/s (13.7MB/s-22.1MB/s), io=229MiB (241MB), run=2926-3744msec 00:10:17.341 00:10:17.341 Disk stats (read/write): 00:10:17.341 nvme0n1: ios=18110/0, merge=0/0, ticks=3075/0, in_queue=3075, util=94.85% 00:10:17.341 nvme0n2: ios=12954/0, merge=0/0, ticks=3357/0, in_queue=3357, util=94.59% 00:10:17.341 nvme0n3: ios=16330/0, merge=0/0, ticks=2882/0, in_queue=2882, util=95.96% 00:10:17.341 nvme0n4: ios=9610/0, merge=0/0, ticks=2742/0, in_queue=2742, util=96.79% 00:10:17.341 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.341 08:21:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:17.599 08:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.599 08:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:17.857 08:21:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.857 08:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:18.115 08:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.115 08:21:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:18.683 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.683 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66988 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.941 nvmf hotplug test: fio failed as expected 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:18.941 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:19.199 08:21:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.199 rmmod nvme_tcp 00:10:19.199 rmmod nvme_fabrics 00:10:19.199 rmmod nvme_keyring 00:10:19.199 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 66598 ']' 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 66598 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 66598 ']' 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 66598 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66598 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.457 killing process with pid 66598 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66598' 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 66598 00:10:19.457 08:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 66598 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:19.761 08:21:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.761 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:20.022 00:10:20.022 real 0m20.671s 00:10:20.022 user 1m17.260s 00:10:20.022 sys 0m10.890s 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.022 ************************************ 00:10:20.022 END TEST nvmf_fio_target 00:10:20.022 ************************************ 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.022 ************************************ 00:10:20.022 START TEST nvmf_bdevio 00:10:20.022 ************************************ 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:20.022 * Looking for test storage... 
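The nvmf_fio_target hotplug check that finished just above boils down to starting fio against the exported namespaces, deleting the backing bdevs out from under it with rpc.py, and treating a non-zero fio exit code as the expected result. A condensed sketch of that flow, reconstructed from the commands echoed in the trace (the fio-wrapper flags and bdev names are taken from the log; the exact control flow inside target/fio.sh is an assumption):

    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # pull the RAID and malloc bdevs out from under the running workload
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
    for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc"
    done
    fio_status=0
    wait "$fio_pid" || fio_status=$?
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # the namespaces vanished mid-run, so a non-zero fio status is the expected outcome
    [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'
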
00:10:20.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.022 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.022 --rc genhtml_branch_coverage=1 00:10:20.022 --rc genhtml_function_coverage=1 00:10:20.022 --rc genhtml_legend=1 00:10:20.022 --rc geninfo_all_blocks=1 00:10:20.022 --rc geninfo_unexecuted_blocks=1 00:10:20.023 00:10:20.023 ' 00:10:20.023 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.023 --rc genhtml_branch_coverage=1 00:10:20.023 --rc genhtml_function_coverage=1 00:10:20.023 --rc genhtml_legend=1 00:10:20.023 --rc geninfo_all_blocks=1 00:10:20.023 --rc geninfo_unexecuted_blocks=1 00:10:20.023 00:10:20.023 ' 00:10:20.023 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.023 --rc genhtml_branch_coverage=1 00:10:20.023 --rc genhtml_function_coverage=1 00:10:20.023 --rc genhtml_legend=1 00:10:20.023 --rc geninfo_all_blocks=1 00:10:20.023 --rc geninfo_unexecuted_blocks=1 00:10:20.023 00:10:20.023 ' 00:10:20.023 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.023 --rc genhtml_branch_coverage=1 00:10:20.023 --rc genhtml_function_coverage=1 00:10:20.023 --rc genhtml_legend=1 00:10:20.023 --rc geninfo_all_blocks=1 00:10:20.023 --rc geninfo_unexecuted_blocks=1 00:10:20.023 00:10:20.023 ' 00:10:20.023 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.023 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.283 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
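nvmftestinit, invoked above, prepares the veth/bridge topology that the TCP transport tests run on: a target namespace (nvmf_tgt_ns_spdk) holding 10.0.0.3/10.0.0.4 and an initiator side on 10.0.0.1/10.0.0.2, joined through the nvmf_br bridge, with iptables ACCEPT rules for port 4420. Condensed from the commands traced below (only the first interface pair is shown; the second pair follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                            # reachability check
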
00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # nvmf_veth_init 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.283 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:20.284 Cannot find device "nvmf_init_br" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:20.284 Cannot find device "nvmf_init_br2" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:20.284 Cannot find device "nvmf_tgt_br" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.284 Cannot find device "nvmf_tgt_br2" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:20.284 Cannot find device "nvmf_init_br" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:20.284 Cannot find device "nvmf_init_br2" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:20.284 Cannot find device "nvmf_tgt_br" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:20.284 Cannot find device "nvmf_tgt_br2" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:20.284 Cannot find device "nvmf_br" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:20.284 Cannot find device "nvmf_init_if" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:20.284 Cannot find device "nvmf_init_if2" 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.284 
08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.284 08:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.284 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.544 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:20.544 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:20.544 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:20.544 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:20.544 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:20.544 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:20.544 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:20.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:10:20.545 00:10:20.545 --- 10.0.0.3 ping statistics --- 00:10:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.545 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:20.545 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:20.545 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:10:20.545 00:10:20.545 --- 10.0.0.4 ping statistics --- 00:10:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.545 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:20.545 00:10:20.545 --- 10.0.0.1 ping statistics --- 00:10:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.545 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:20.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
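The ipts wrapper used above tags every rule with an "SPDK_NVMF:" comment so teardown can later strip exactly these rules (the iptr step during nvmftestfini); the ping checks then confirm reachability in both directions across the bridge. In sketch form (the wrapper body is approximated from the expanded iptables calls in the trace):

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  # Accept NVMe/TCP (port 4420) from both initiator interfaces, and let traffic cross the bridge.
  ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Cleanup later removes only the tagged rules:
  iptables-save | grep -v SPDK_NVMF | iptables-restore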
00:10:20.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:10:20.545 00:10:20.545 --- 10.0.0.2 ping statistics --- 00:10:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.545 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # return 0 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=67355 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 67355 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 67355 ']' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.545 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.804 [2024-10-15 08:21:22.275430] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
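nvmfappstart above boils down to launching nvmf_tgt inside the namespace and waiting for its RPC socket; a minimal equivalent (the waitforlisten helper is approximated here by polling rpc_get_methods) would be:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  # Block until the target answers on /var/tmp/spdk.sock before issuing configuration RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done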
00:10:20.804 [2024-10-15 08:21:22.275535] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.804 [2024-10-15 08:21:22.424888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.804 [2024-10-15 08:21:22.513886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.804 [2024-10-15 08:21:22.513955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.804 [2024-10-15 08:21:22.513970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.804 [2024-10-15 08:21:22.513981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.804 [2024-10-15 08:21:22.513999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.804 [2024-10-15 08:21:22.515834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:20.804 [2024-10-15 08:21:22.515899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:20.804 [2024-10-15 08:21:22.516029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:20.804 [2024-10-15 08:21:22.516039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.064 [2024-10-15 08:21:22.593025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.064 [2024-10-15 08:21:22.717154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.064 Malloc0 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.064 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.064 [2024-10-15 08:21:22.790478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:21.322 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.322 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:21.322 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:21.323 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:21.323 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:21.323 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:21.323 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:21.323 { 00:10:21.323 "params": { 00:10:21.323 "name": "Nvme$subsystem", 00:10:21.323 "trtype": "$TEST_TRANSPORT", 00:10:21.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.323 "adrfam": "ipv4", 00:10:21.323 "trsvcid": "$NVMF_PORT", 00:10:21.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.323 "hdgst": ${hdgst:-false}, 00:10:21.323 "ddgst": ${ddgst:-false} 00:10:21.323 }, 00:10:21.323 "method": "bdev_nvme_attach_controller" 00:10:21.323 } 00:10:21.323 EOF 00:10:21.323 )") 00:10:21.323 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:21.323 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
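The rpc_cmd steps from bdevio.sh (@18-@22 above) go through scripts/rpc.py; written out explicitly they amount to roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport with the harness's flags; -u 8192 sets the IO unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Once the listener is up on 10.0.0.3:4420, gen_nvmf_target_json (traced next) emits the initiator-side JSON that bdevio consumes.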
00:10:21.323 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:21.323 08:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:21.323 "params": { 00:10:21.323 "name": "Nvme1", 00:10:21.323 "trtype": "tcp", 00:10:21.323 "traddr": "10.0.0.3", 00:10:21.323 "adrfam": "ipv4", 00:10:21.323 "trsvcid": "4420", 00:10:21.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.323 "hdgst": false, 00:10:21.323 "ddgst": false 00:10:21.323 }, 00:10:21.323 "method": "bdev_nvme_attach_controller" 00:10:21.323 }' 00:10:21.323 [2024-10-15 08:21:22.854173] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:10:21.323 [2024-10-15 08:21:22.854276] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67389 ] 00:10:21.323 [2024-10-15 08:21:23.002735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.581 [2024-10-15 08:21:23.096640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.581 [2024-10-15 08:21:23.096532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.581 [2024-10-15 08:21:23.096633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.581 [2024-10-15 08:21:23.180625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:21.840 I/O targets: 00:10:21.841 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:21.841 00:10:21.841 00:10:21.841 CUnit - A unit testing framework for C - Version 2.1-3 00:10:21.841 http://cunit.sourceforge.net/ 00:10:21.841 00:10:21.841 00:10:21.841 Suite: bdevio tests on: Nvme1n1 00:10:21.841 Test: blockdev write read block ...passed 00:10:21.841 Test: blockdev write zeroes read block ...passed 00:10:21.841 Test: blockdev write zeroes read no split ...passed 00:10:21.841 Test: blockdev write zeroes read split ...passed 00:10:21.841 Test: blockdev write zeroes read split partial ...passed 00:10:21.841 Test: blockdev reset ...[2024-10-15 08:21:23.343181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:21.841 [2024-10-15 08:21:23.343309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5b040 (9): Bad file descriptor 00:10:21.841 passed 00:10:21.841 Test: blockdev write read 8 blocks ...[2024-10-15 08:21:23.356813] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
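Stripped of the log prefixes, the config fragment that gen_nvmf_target_json printed above (and that bdevio reads via --json /dev/fd/62) is the initiator-side attach to the listener created earlier; the jq step presumably wraps it into a full "subsystems"/"bdev" config block:

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.3",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

This is why the bdevio suite that follows sees a single Nvme1n1 bdev (the 64 MiB Malloc0 namespace) to run its read/write/compare tests against.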
00:10:21.841 passed 00:10:21.841 Test: blockdev write read size > 128k ...passed 00:10:21.841 Test: blockdev write read invalid size ...passed 00:10:21.841 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:21.841 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:21.841 Test: blockdev write read max offset ...passed 00:10:21.841 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:21.841 Test: blockdev writev readv 8 blocks ...passed 00:10:21.841 Test: blockdev writev readv 30 x 1block ...passed 00:10:21.841 Test: blockdev writev readv block ...passed 00:10:21.841 Test: blockdev writev readv size > 128k ...passed 00:10:21.841 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:21.841 Test: blockdev comparev and writev ...[2024-10-15 08:21:23.364928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.841 [2024-10-15 08:21:23.364978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.365006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.841 [2024-10-15 08:21:23.365026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.365513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.841 [2024-10-15 08:21:23.365544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.365578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.841 [2024-10-15 08:21:23.365588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.365882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.841 [2024-10-15 08:21:23.365899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.365916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.841 [2024-10-15 08:21:23.365926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.366253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.841 [2024-10-15 08:21:23.366271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.366287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.841 [2024-10-15 08:21:23.366298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:21.841 passed 00:10:21.841 Test: blockdev nvme passthru rw ...passed 00:10:21.841 Test: blockdev nvme passthru vendor specific ...passed 00:10:21.841 Test: blockdev nvme admin passthru ...[2024-10-15 08:21:23.367105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.841 [2024-10-15 08:21:23.367142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.367261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.841 [2024-10-15 08:21:23.367277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.367388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.841 [2024-10-15 08:21:23.367403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:21.841 [2024-10-15 08:21:23.367513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.841 [2024-10-15 08:21:23.367528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:21.841 passed 00:10:21.841 Test: blockdev copy ...passed 00:10:21.841 00:10:21.841 Run Summary: Type Total Ran Passed Failed Inactive 00:10:21.841 suites 1 1 n/a 0 0 00:10:21.841 tests 23 23 23 0 0 00:10:21.841 asserts 152 152 152 0 n/a 00:10:21.841 00:10:21.841 Elapsed time = 0.152 seconds 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.100 rmmod nvme_tcp 00:10:22.100 rmmod nvme_fabrics 00:10:22.100 rmmod nvme_keyring 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@515 -- # '[' -n 67355 ']' 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 67355 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 67355 ']' 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 67355 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67355 00:10:22.100 killing process with pid 67355 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67355' 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 67355 00:10:22.100 08:21:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 67355 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:22.359 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:22.684 08:21:24 
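nvmftestfini above unwinds the setup in reverse order: unload the host-side NVMe modules, kill the target, strip the tagged firewall rules, then tear down the veth/bridge topology and the namespace. Condensed (the final namespace removal is what _remove_spdk_ns amounts to, assumed here):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                       # killprocess 67355
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: drop only the SPDK_NVMF-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" nomaster; done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk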
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:22.684 00:10:22.684 real 0m2.731s 00:10:22.684 user 0m7.582s 00:10:22.684 sys 0m0.983s 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.684 ************************************ 00:10:22.684 END TEST nvmf_bdevio 00:10:22.684 ************************************ 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:22.684 ************************************ 00:10:22.684 END TEST nvmf_target_core 00:10:22.684 ************************************ 00:10:22.684 00:10:22.684 real 2m40.020s 00:10:22.684 user 6m56.091s 00:10:22.684 sys 0m55.577s 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.684 08:21:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.684 08:21:24 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:22.685 08:21:24 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.685 08:21:24 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.685 08:21:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.685 ************************************ 00:10:22.685 START TEST nvmf_target_extra 00:10:22.685 ************************************ 00:10:22.685 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:22.943 * Looking for test storage... 
00:10:22.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:22.943 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:22.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.944 --rc genhtml_branch_coverage=1 00:10:22.944 --rc genhtml_function_coverage=1 00:10:22.944 --rc genhtml_legend=1 00:10:22.944 --rc geninfo_all_blocks=1 00:10:22.944 --rc geninfo_unexecuted_blocks=1 00:10:22.944 00:10:22.944 ' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:22.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.944 --rc genhtml_branch_coverage=1 00:10:22.944 --rc genhtml_function_coverage=1 00:10:22.944 --rc genhtml_legend=1 00:10:22.944 --rc geninfo_all_blocks=1 00:10:22.944 --rc geninfo_unexecuted_blocks=1 00:10:22.944 00:10:22.944 ' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:22.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.944 --rc genhtml_branch_coverage=1 00:10:22.944 --rc genhtml_function_coverage=1 00:10:22.944 --rc genhtml_legend=1 00:10:22.944 --rc geninfo_all_blocks=1 00:10:22.944 --rc geninfo_unexecuted_blocks=1 00:10:22.944 00:10:22.944 ' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:22.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.944 --rc genhtml_branch_coverage=1 00:10:22.944 --rc genhtml_function_coverage=1 00:10:22.944 --rc genhtml_legend=1 00:10:22.944 --rc geninfo_all_blocks=1 00:10:22.944 --rc geninfo_unexecuted_blocks=1 00:10:22.944 00:10:22.944 ' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.944 08:21:24 
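The scripts/common.sh trace interleaved above is only an lcov version gate: lt 1.15 2 splits both version strings on ".-:" and compares them field by field, and since the leading field 1 is below 2 it selects the pre-2.0 LCOV option set. A simplified re-sketch of that comparison (not the literal common.sh code):

  lt() { cmp_versions "$1" "<" "$2"; }
  cmp_versions() {
      local IFS=.-:
      local -a ver1=($1) ver2=($3)   # "1.15" -> (1 15), "2" -> (2)
      local v
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == ">" ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == "<" ]]; return; }
      done
      [[ $2 == "==" ]]
  }
  lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"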
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.944 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:22.944 ************************************ 00:10:22.944 START TEST nvmf_auth_target 00:10:22.944 ************************************ 00:10:22.944 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:23.206 * Looking for test storage... 
00:10:23.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:23.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.206 --rc genhtml_branch_coverage=1 00:10:23.206 --rc genhtml_function_coverage=1 00:10:23.206 --rc genhtml_legend=1 00:10:23.206 --rc geninfo_all_blocks=1 00:10:23.206 --rc geninfo_unexecuted_blocks=1 00:10:23.206 00:10:23.206 ' 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:23.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.206 --rc genhtml_branch_coverage=1 00:10:23.206 --rc genhtml_function_coverage=1 00:10:23.206 --rc genhtml_legend=1 00:10:23.206 --rc geninfo_all_blocks=1 00:10:23.206 --rc geninfo_unexecuted_blocks=1 00:10:23.206 00:10:23.206 ' 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:23.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.206 --rc genhtml_branch_coverage=1 00:10:23.206 --rc genhtml_function_coverage=1 00:10:23.206 --rc genhtml_legend=1 00:10:23.206 --rc geninfo_all_blocks=1 00:10:23.206 --rc geninfo_unexecuted_blocks=1 00:10:23.206 00:10:23.206 ' 00:10:23.206 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:23.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.206 --rc genhtml_branch_coverage=1 00:10:23.207 --rc genhtml_function_coverage=1 00:10:23.207 --rc genhtml_legend=1 00:10:23.207 --rc geninfo_all_blocks=1 00:10:23.207 --rc geninfo_unexecuted_blocks=1 00:10:23.207 00:10:23.207 ' 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.207 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:23.207 
08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:23.207 Cannot find device "nvmf_init_br" 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:23.207 Cannot find device "nvmf_init_br2" 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:23.207 Cannot find device "nvmf_tgt_br" 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.207 Cannot find device "nvmf_tgt_br2" 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:23.207 Cannot find device "nvmf_init_br" 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:23.207 Cannot find device "nvmf_init_br2" 00:10:23.207 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:23.208 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:23.208 Cannot find device "nvmf_tgt_br" 00:10:23.208 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:23.208 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:23.208 Cannot find device "nvmf_tgt_br2" 00:10:23.208 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:23.208 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:23.466 Cannot find device "nvmf_br" 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:23.466 Cannot find device "nvmf_init_if" 00:10:23.466 08:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:23.466 Cannot find device "nvmf_init_if2" 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:23.466 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:23.466 08:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.466 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:23.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:10:23.726 00:10:23.726 --- 10.0.0.3 ping statistics --- 00:10:23.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.726 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:23.726 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:23.726 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:10:23.726 00:10:23.726 --- 10.0.0.4 ping statistics --- 00:10:23.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.726 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:23.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:23.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:23.726 00:10:23.726 --- 10.0.0.1 ping statistics --- 00:10:23.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.726 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:23.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:10:23.726 00:10:23.726 --- 10.0.0.2 ping statistics --- 00:10:23.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.726 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # return 0 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.726 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=67675 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 67675 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67675 ']' 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
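
For orientation, the nvmf_veth_init steps above assemble a small self-contained test network: two initiator-side veth pairs stay in the root namespace, two target-side pairs have their "if" ends moved into the nvmf_tgt_ns_spdk namespace, and the four "br" peer ends are enslaved to the nvmf_br bridge, so 10.0.0.1/10.0.0.2 (initiator) can reach 10.0.0.3/10.0.0.4 (target) without touching real NICs; the ipts wrappers then open TCP/4420 on the initiator interfaces and allow forwarding across the bridge. A condensed re-creation of that topology, using the interface and namespace names from the trace (run as root; firewall rules omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator pair 1 (10.0.0.1)
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator pair 2 (10.0.0.2)
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target pair 1 (10.0.0.3)
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target pair 2 (10.0.0.4)
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                         # bridge the four peer ends together
done
ping -c 1 10.0.0.3                                            # root namespace reaches the target side
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # and the namespace reaches the initiator side

The pings that follow in the log are exactly this connectivity check, run once per address before the nvmf target is started inside the namespace (the ip netns exec ... prefix prepended to NVMF_APP).
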
00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.727 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67713 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8edfd3c42174564d40d3fb55bd7ba3dfee2c572e6847bf35 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.76F 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 8edfd3c42174564d40d3fb55bd7ba3dfee2c572e6847bf35 0 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8edfd3c42174564d40d3fb55bd7ba3dfee2c572e6847bf35 0 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8edfd3c42174564d40d3fb55bd7ba3dfee2c572e6847bf35 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:25.102 08:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.76F 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.76F 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.76F 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ec89fed958d63bf18da8a99699ee6722a64ae2962d696b9856a2cb34ec0c88be 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.PB9 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ec89fed958d63bf18da8a99699ee6722a64ae2962d696b9856a2cb34ec0c88be 3 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ec89fed958d63bf18da8a99699ee6722a64ae2962d696b9856a2cb34ec0c88be 3 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ec89fed958d63bf18da8a99699ee6722a64ae2962d696b9856a2cb34ec0c88be 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.PB9 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.PB9 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.PB9 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:10:25.102 08:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5fe793518466bc70e0b50ac534cba144 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.faq 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5fe793518466bc70e0b50ac534cba144 1 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5fe793518466bc70e0b50ac534cba144 1 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5fe793518466bc70e0b50ac534cba144 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.faq 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.faq 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.faq 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f0ef44dbbcd3a5a877382195d13f19286a58262f003efabb 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.6Ox 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f0ef44dbbcd3a5a877382195d13f19286a58262f003efabb 2 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f0ef44dbbcd3a5a877382195d13f19286a58262f003efabb 2 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:25.102 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f0ef44dbbcd3a5a877382195d13f19286a58262f003efabb 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.6Ox 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.6Ox 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.6Ox 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=da70a373ee6947e4e977e2d36bd1733d066a4f27c2039309 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.6Ev 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key da70a373ee6947e4e977e2d36bd1733d066a4f27c2039309 2 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 da70a373ee6947e4e977e2d36bd1733d066a4f27c2039309 2 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=da70a373ee6947e4e977e2d36bd1733d066a4f27c2039309 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.6Ev 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.6Ev 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.6Ev 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:25.103 08:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=58da5019e90c35d10e8591d4b4f565a4 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.XLg 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 58da5019e90c35d10e8591d4b4f565a4 1 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 58da5019e90c35d10e8591d4b4f565a4 1 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=58da5019e90c35d10e8591d4b4f565a4 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:10:25.103 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.XLg 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.XLg 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.XLg 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5fcb4bceaa4ac0e1a0c8c9171b9b5b44b4267b58eaee67cb2e0342b9a3436f30 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.86p 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 
5fcb4bceaa4ac0e1a0c8c9171b9b5b44b4267b58eaee67cb2e0342b9a3436f30 3 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5fcb4bceaa4ac0e1a0c8c9171b9b5b44b4267b58eaee67cb2e0342b9a3436f30 3 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:10:25.361 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5fcb4bceaa4ac0e1a0c8c9171b9b5b44b4267b58eaee67cb2e0342b9a3436f30 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.86p 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.86p 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.86p 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67675 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67675 ']' 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.362 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67713 /var/tmp/host.sock 00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 67713 ']' 00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
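
Each gen_dhchap_key <digest> <len> call traced above follows the same recipe: pull len/2 random bytes from /dev/urandom as a hex string, create a mode-0600 temp file, and store the secret there in the DHHC-1 form that later reappears on the nvme connect command line. A stripped-down sketch of the visible shell steps (variable names are illustrative; the real helpers are gen_dhchap_key/format_dhchap_key in SPDK's nvmf/common.sh):

digest=null; len=48                               # e.g. the first call above: gen_dhchap_key null 48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # 'len' hex characters of key material
file=$(mktemp -t "spdk.key-$digest.XXX")          # e.g. /tmp/spdk.key-null.76F in the trace
# format_dhchap_key writes the wrapped secret into the file:
#   DHHC-1:<id>:<base64 blob>:  with <id> 00/01/02/03 for null/sha256/sha384/sha512;
#   the base64 blob comes from the inline python step seen above (the key material plus
#   a short checksum) -- that detail is read off this trace, not quoted from the spec.
chmod 0600 "$file"
echo "$file"                                      # the path stored into keys[n] / ckeys[n]

The four keys[] entries and three ckeys[] entries produced here are what the keyring_file_add_key RPCs below register with both the target (/var/tmp/spdk.sock) and the host-side app (/var/tmp/host.sock), before each key is bound to the host NQN with nvmf_subsystem_add_host --dhchap-key/--dhchap-ctrlr-key.
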
00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.620 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.76F 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.76F 00:10:25.878 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.76F 00:10:26.136 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.PB9 ]] 00:10:26.136 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PB9 00:10:26.136 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.136 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.136 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.136 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PB9 00:10:26.136 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PB9 00:10:26.712 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:26.712 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.faq 00:10:26.712 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.712 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.712 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.712 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.faq 00:10:26.712 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.faq 00:10:26.970 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.6Ox ]] 00:10:26.970 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6Ox 00:10:26.970 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.970 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.970 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.970 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6Ox 00:10:26.970 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6Ox 00:10:27.228 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:27.228 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6Ev 00:10:27.228 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.228 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.228 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.229 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.6Ev 00:10:27.229 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.6Ev 00:10:27.487 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.XLg ]] 00:10:27.487 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLg 00:10:27.487 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.487 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.487 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.487 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLg 00:10:27.487 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLg 00:10:28.054 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:28.054 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.86p 00:10:28.054 08:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.054 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.054 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.054 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.86p 00:10:28.054 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.86p 00:10:28.312 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:28.312 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:28.312 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:28.312 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.312 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:28.312 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.570 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.827 00:10:28.827 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.827 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.827 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.434 { 00:10:29.434 "cntlid": 1, 00:10:29.434 "qid": 0, 00:10:29.434 "state": "enabled", 00:10:29.434 "thread": "nvmf_tgt_poll_group_000", 00:10:29.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:29.434 "listen_address": { 00:10:29.434 "trtype": "TCP", 00:10:29.434 "adrfam": "IPv4", 00:10:29.434 "traddr": "10.0.0.3", 00:10:29.434 "trsvcid": "4420" 00:10:29.434 }, 00:10:29.434 "peer_address": { 00:10:29.434 "trtype": "TCP", 00:10:29.434 "adrfam": "IPv4", 00:10:29.434 "traddr": "10.0.0.1", 00:10:29.434 "trsvcid": "36656" 00:10:29.434 }, 00:10:29.434 "auth": { 00:10:29.434 "state": "completed", 00:10:29.434 "digest": "sha256", 00:10:29.434 "dhgroup": "null" 00:10:29.434 } 00:10:29.434 } 00:10:29.434 ]' 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:29.434 08:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.434 08:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.434 08:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.434 08:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.693 08:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:10:29.693 08:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:10:35.073 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.073 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:35.073 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.073 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.073 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.073 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.073 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.073 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.073 08:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.073 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.073 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.331 { 00:10:35.331 "cntlid": 3, 00:10:35.331 "qid": 0, 00:10:35.331 "state": "enabled", 00:10:35.331 "thread": "nvmf_tgt_poll_group_000", 00:10:35.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:35.331 "listen_address": { 00:10:35.331 "trtype": "TCP", 00:10:35.331 "adrfam": "IPv4", 00:10:35.331 "traddr": "10.0.0.3", 00:10:35.331 "trsvcid": "4420" 00:10:35.331 }, 00:10:35.331 "peer_address": { 00:10:35.331 "trtype": "TCP", 00:10:35.331 "adrfam": "IPv4", 00:10:35.331 "traddr": "10.0.0.1", 00:10:35.331 "trsvcid": "36694" 00:10:35.331 }, 00:10:35.331 "auth": { 00:10:35.331 "state": "completed", 00:10:35.331 "digest": "sha256", 00:10:35.331 "dhgroup": "null" 00:10:35.331 } 00:10:35.331 } 00:10:35.331 ]' 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.331 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.589 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret 
DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:10:35.589 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:10:36.525 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.525 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:36.525 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.525 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.525 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.525 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.525 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.525 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.815 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.073 00:10:37.073 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.073 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.073 08:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.334 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.334 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.334 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.334 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.334 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.334 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.334 { 00:10:37.334 "cntlid": 5, 00:10:37.334 "qid": 0, 00:10:37.334 "state": "enabled", 00:10:37.334 "thread": "nvmf_tgt_poll_group_000", 00:10:37.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:37.334 "listen_address": { 00:10:37.334 "trtype": "TCP", 00:10:37.334 "adrfam": "IPv4", 00:10:37.334 "traddr": "10.0.0.3", 00:10:37.334 "trsvcid": "4420" 00:10:37.334 }, 00:10:37.334 "peer_address": { 00:10:37.334 "trtype": "TCP", 00:10:37.334 "adrfam": "IPv4", 00:10:37.334 "traddr": "10.0.0.1", 00:10:37.334 "trsvcid": "53378" 00:10:37.334 }, 00:10:37.334 "auth": { 00:10:37.334 "state": "completed", 00:10:37.334 "digest": "sha256", 00:10:37.334 "dhgroup": "null" 00:10:37.334 } 00:10:37.334 } 00:10:37.334 ]' 00:10:37.334 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.594 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.594 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.594 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:37.594 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.594 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.594 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.594 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.852 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:10:37.852 08:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:10:38.787 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.787 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:38.787 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.787 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.787 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.787 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.787 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:38.787 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:39.045 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:39.303 00:10:39.303 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.303 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.303 08:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.869 { 00:10:39.869 "cntlid": 7, 00:10:39.869 "qid": 0, 00:10:39.869 "state": "enabled", 00:10:39.869 "thread": "nvmf_tgt_poll_group_000", 00:10:39.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:39.869 "listen_address": { 00:10:39.869 "trtype": "TCP", 00:10:39.869 "adrfam": "IPv4", 00:10:39.869 "traddr": "10.0.0.3", 00:10:39.869 "trsvcid": "4420" 00:10:39.869 }, 00:10:39.869 "peer_address": { 00:10:39.869 "trtype": "TCP", 00:10:39.869 "adrfam": "IPv4", 00:10:39.869 "traddr": "10.0.0.1", 00:10:39.869 "trsvcid": "53412" 00:10:39.869 }, 00:10:39.869 "auth": { 00:10:39.869 "state": "completed", 00:10:39.869 "digest": "sha256", 00:10:39.869 "dhgroup": "null" 00:10:39.869 } 00:10:39.869 } 00:10:39.869 ]' 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.869 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.127 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:10:40.127 08:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:41.062 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.320 08:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.578 00:10:41.578 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.578 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.578 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.836 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.836 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.836 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.836 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.836 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.836 { 00:10:41.836 "cntlid": 9, 00:10:41.836 "qid": 0, 00:10:41.836 "state": "enabled", 00:10:41.836 "thread": "nvmf_tgt_poll_group_000", 00:10:41.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:41.836 "listen_address": { 00:10:41.836 "trtype": "TCP", 00:10:41.836 "adrfam": "IPv4", 00:10:41.836 "traddr": "10.0.0.3", 00:10:41.836 "trsvcid": "4420" 00:10:41.836 }, 00:10:41.836 "peer_address": { 00:10:41.836 "trtype": "TCP", 00:10:41.836 "adrfam": "IPv4", 00:10:41.836 "traddr": "10.0.0.1", 00:10:41.836 "trsvcid": "53440" 00:10:41.836 }, 00:10:41.837 "auth": { 00:10:41.837 "state": "completed", 00:10:41.837 "digest": "sha256", 00:10:41.837 "dhgroup": "ffdhe2048" 00:10:41.837 } 00:10:41.837 } 00:10:41.837 ]' 00:10:41.837 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.095 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.095 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.095 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:42.095 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.095 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.095 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.095 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.352 
08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:10:42.352 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:10:43.000 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.000 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:43.000 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.000 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.000 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.000 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.000 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:43.000 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.263 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.830 00:10:43.830 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.830 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.830 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.088 { 00:10:44.088 "cntlid": 11, 00:10:44.088 "qid": 0, 00:10:44.088 "state": "enabled", 00:10:44.088 "thread": "nvmf_tgt_poll_group_000", 00:10:44.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:44.088 "listen_address": { 00:10:44.088 "trtype": "TCP", 00:10:44.088 "adrfam": "IPv4", 00:10:44.088 "traddr": "10.0.0.3", 00:10:44.088 "trsvcid": "4420" 00:10:44.088 }, 00:10:44.088 "peer_address": { 00:10:44.088 "trtype": "TCP", 00:10:44.088 "adrfam": "IPv4", 00:10:44.088 "traddr": "10.0.0.1", 00:10:44.088 "trsvcid": "53462" 00:10:44.088 }, 00:10:44.088 "auth": { 00:10:44.088 "state": "completed", 00:10:44.088 "digest": "sha256", 00:10:44.088 "dhgroup": "ffdhe2048" 00:10:44.088 } 00:10:44.088 } 00:10:44.088 ]' 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.088 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.088 
08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.654 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:10:44.654 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:10:45.219 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.219 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:45.219 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.220 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.220 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.220 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.220 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:45.220 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.478 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.736 00:10:45.736 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.736 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.736 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.994 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.994 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.994 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.994 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.994 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.994 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.994 { 00:10:45.994 "cntlid": 13, 00:10:45.994 "qid": 0, 00:10:45.994 "state": "enabled", 00:10:45.994 "thread": "nvmf_tgt_poll_group_000", 00:10:45.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:45.994 "listen_address": { 00:10:45.994 "trtype": "TCP", 00:10:45.994 "adrfam": "IPv4", 00:10:45.994 "traddr": "10.0.0.3", 00:10:45.994 "trsvcid": "4420" 00:10:45.994 }, 00:10:45.994 "peer_address": { 00:10:45.994 "trtype": "TCP", 00:10:45.994 "adrfam": "IPv4", 00:10:45.994 "traddr": "10.0.0.1", 00:10:45.994 "trsvcid": "53502" 00:10:45.994 }, 00:10:45.994 "auth": { 00:10:45.994 "state": "completed", 00:10:45.994 "digest": "sha256", 00:10:45.994 "dhgroup": "ffdhe2048" 00:10:45.994 } 00:10:45.994 } 00:10:45.994 ]' 00:10:45.994 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.253 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.253 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.253 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:46.253 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.253 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.253 08:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.253 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.510 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:10:46.510 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:10:47.443 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.443 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:47.443 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.443 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.443 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.443 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:47.443 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
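[Note for readers following the trace] Each round above is one pass of the connect_authenticate helper in target/auth.sh: it pins the host to a single digest/dhgroup, registers one DH-HMAC-CHAP key pair for the host NQN on the target, attaches a controller with that key, checks the negotiated auth parameters on the qpair, and tears everything down before the next combination. The following is a minimal sketch of that round trip assembled from the RPCs visible in the trace; it is illustrative, not a literal replay. SUBNQN and HOSTNQN are placeholders for nqn.2024-03.io.spdk:cnode0 and the uuid-based host NQN used here, the keyN/ckeyN names are keyring entries registered earlier in the script (not shown in this excerpt), rpc.py with no -s addresses the target's default RPC socket, and -s /var/tmp/host.sock addresses the host-side bdev_nvme application, as the expanded hostrpc calls in the trace show.

  # host side: restrict the allowed digest and DH group, matching the combination under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # target side: allow the host with the key under test (the ctrlr key is added only when a ckeyN exists)
  rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3
  # host side: attach a controller using that key; this is where DH-HMAC-CHAP actually runs
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3
  # verify: the controller exists and the qpair reports the expected auth state/digest/dhgroup
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0
  rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'        # expects "completed"
  # tear down before the next digest/dhgroup/key combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The nvme connect / nvme disconnect lines in the trace are the kernel-initiator variant of the same check, driven by the literal DHHC-1 secrets rather than keyring names.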
00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:47.701 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:47.959 00:10:47.959 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:47.959 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.959 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.217 { 00:10:48.217 "cntlid": 15, 00:10:48.217 "qid": 0, 00:10:48.217 "state": "enabled", 00:10:48.217 "thread": "nvmf_tgt_poll_group_000", 00:10:48.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:48.217 "listen_address": { 00:10:48.217 "trtype": "TCP", 00:10:48.217 "adrfam": "IPv4", 00:10:48.217 "traddr": "10.0.0.3", 00:10:48.217 "trsvcid": "4420" 00:10:48.217 }, 00:10:48.217 "peer_address": { 00:10:48.217 "trtype": "TCP", 00:10:48.217 "adrfam": "IPv4", 00:10:48.217 "traddr": "10.0.0.1", 00:10:48.217 "trsvcid": "40238" 00:10:48.217 }, 00:10:48.217 "auth": { 00:10:48.217 "state": "completed", 00:10:48.217 "digest": "sha256", 00:10:48.217 "dhgroup": "ffdhe2048" 00:10:48.217 } 00:10:48.217 } 00:10:48.217 ]' 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.217 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.474 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:48.475 08:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.475 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.475 
08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.475 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.732 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:10:48.732 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.666 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.926 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:50.185 00:10:50.185 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.185 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.185 08:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.481 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.481 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.481 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.481 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.481 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.481 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.481 { 00:10:50.481 "cntlid": 17, 00:10:50.481 "qid": 0, 00:10:50.481 "state": "enabled", 00:10:50.481 "thread": "nvmf_tgt_poll_group_000", 00:10:50.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:50.481 "listen_address": { 00:10:50.481 "trtype": "TCP", 00:10:50.481 "adrfam": "IPv4", 00:10:50.481 "traddr": "10.0.0.3", 00:10:50.481 "trsvcid": "4420" 00:10:50.481 }, 00:10:50.481 "peer_address": { 00:10:50.481 "trtype": "TCP", 00:10:50.481 "adrfam": "IPv4", 00:10:50.481 "traddr": "10.0.0.1", 00:10:50.481 "trsvcid": "40258" 00:10:50.481 }, 00:10:50.481 "auth": { 00:10:50.481 "state": "completed", 00:10:50.481 "digest": "sha256", 00:10:50.481 "dhgroup": "ffdhe3072" 00:10:50.481 } 00:10:50.481 } 00:10:50.481 ]' 00:10:50.481 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.741 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.741 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.741 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:50.741 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.741 08:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.741 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.741 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.000 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:10:51.000 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:10:51.936 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.936 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:51.936 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.936 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.936 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.936 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.936 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.936 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.195 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.454 00:10:52.454 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:52.454 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.454 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.020 { 00:10:53.020 "cntlid": 19, 00:10:53.020 "qid": 0, 00:10:53.020 "state": "enabled", 00:10:53.020 "thread": "nvmf_tgt_poll_group_000", 00:10:53.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:53.020 "listen_address": { 00:10:53.020 "trtype": "TCP", 00:10:53.020 "adrfam": "IPv4", 00:10:53.020 "traddr": "10.0.0.3", 00:10:53.020 "trsvcid": "4420" 00:10:53.020 }, 00:10:53.020 "peer_address": { 00:10:53.020 "trtype": "TCP", 00:10:53.020 "adrfam": "IPv4", 00:10:53.020 "traddr": "10.0.0.1", 00:10:53.020 "trsvcid": "40284" 00:10:53.020 }, 00:10:53.020 "auth": { 00:10:53.020 "state": "completed", 00:10:53.020 "digest": "sha256", 00:10:53.020 "dhgroup": "ffdhe3072" 00:10:53.020 } 00:10:53.020 } 00:10:53.020 ]' 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.020 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.277 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:10:53.277 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.212 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.470 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.470 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.470 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.470 08:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:54.729 00:10:54.729 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:54.729 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:54.729 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.988 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.988 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.988 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.988 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.988 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.988 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:54.988 { 00:10:54.988 "cntlid": 21, 00:10:54.988 "qid": 0, 00:10:54.988 "state": "enabled", 00:10:54.988 "thread": "nvmf_tgt_poll_group_000", 00:10:54.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:54.988 "listen_address": { 00:10:54.988 "trtype": "TCP", 00:10:54.988 "adrfam": "IPv4", 00:10:54.988 "traddr": "10.0.0.3", 00:10:54.988 "trsvcid": "4420" 00:10:54.988 }, 00:10:54.988 "peer_address": { 00:10:54.988 "trtype": "TCP", 00:10:54.988 "adrfam": "IPv4", 00:10:54.988 "traddr": "10.0.0.1", 00:10:54.988 "trsvcid": "40316" 00:10:54.988 }, 00:10:54.988 "auth": { 00:10:54.988 "state": "completed", 00:10:54.988 "digest": "sha256", 00:10:54.988 "dhgroup": "ffdhe3072" 00:10:54.988 } 00:10:54.988 } 00:10:54.988 ]' 00:10:54.988 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.247 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.247 08:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.247 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:55.247 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.247 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.247 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.247 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.505 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:10:55.505 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:10:56.071 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.071 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:56.071 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.071 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.071 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.071 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.071 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:56.072 08:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:56.642 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:56.901 00:10:56.901 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.901 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.901 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.160 { 00:10:57.160 "cntlid": 23, 00:10:57.160 "qid": 0, 00:10:57.160 "state": "enabled", 00:10:57.160 "thread": "nvmf_tgt_poll_group_000", 00:10:57.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:57.160 "listen_address": { 00:10:57.160 "trtype": "TCP", 00:10:57.160 "adrfam": "IPv4", 00:10:57.160 "traddr": "10.0.0.3", 00:10:57.160 "trsvcid": "4420" 00:10:57.160 }, 00:10:57.160 "peer_address": { 00:10:57.160 "trtype": "TCP", 00:10:57.160 "adrfam": "IPv4", 00:10:57.160 "traddr": "10.0.0.1", 00:10:57.160 "trsvcid": "54870" 00:10:57.160 }, 00:10:57.160 "auth": { 00:10:57.160 "state": "completed", 00:10:57.160 "digest": "sha256", 00:10:57.160 "dhgroup": "ffdhe3072" 00:10:57.160 } 00:10:57.160 } 00:10:57.160 ]' 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:57.160 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.419 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:57.419 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.419 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.419 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.419 08:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.678 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:10:57.678 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:58.263 08:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:58.522 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:59.090 00:10:59.090 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.090 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.090 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.350 { 00:10:59.350 "cntlid": 25, 00:10:59.350 "qid": 0, 00:10:59.350 "state": "enabled", 00:10:59.350 "thread": "nvmf_tgt_poll_group_000", 00:10:59.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:10:59.350 "listen_address": { 00:10:59.350 "trtype": "TCP", 00:10:59.350 "adrfam": "IPv4", 00:10:59.350 "traddr": "10.0.0.3", 00:10:59.350 "trsvcid": "4420" 00:10:59.350 }, 00:10:59.350 "peer_address": { 00:10:59.350 "trtype": "TCP", 00:10:59.350 "adrfam": "IPv4", 00:10:59.350 "traddr": "10.0.0.1", 00:10:59.350 "trsvcid": "54884" 00:10:59.350 }, 00:10:59.350 "auth": { 00:10:59.350 "state": "completed", 00:10:59.350 "digest": "sha256", 00:10:59.350 "dhgroup": "ffdhe4096" 00:10:59.350 } 00:10:59.350 } 00:10:59.350 ]' 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.350 08:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.350 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:59.350 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.350 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.350 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.350 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.918 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:10:59.918 08:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:00.487 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.487 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:00.487 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.487 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.487 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.487 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.487 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.487 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.747 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.007 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.007 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.007 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.007 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:01.269 00:11:01.269 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.269 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.269 08:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.836 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.836 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.836 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.836 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.836 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.836 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.836 { 00:11:01.836 "cntlid": 27, 00:11:01.836 "qid": 0, 00:11:01.836 "state": "enabled", 00:11:01.836 "thread": "nvmf_tgt_poll_group_000", 00:11:01.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:01.836 "listen_address": { 00:11:01.836 "trtype": "TCP", 00:11:01.836 "adrfam": "IPv4", 00:11:01.836 "traddr": "10.0.0.3", 00:11:01.836 "trsvcid": "4420" 00:11:01.836 }, 00:11:01.836 "peer_address": { 00:11:01.836 "trtype": "TCP", 00:11:01.836 "adrfam": "IPv4", 00:11:01.836 "traddr": "10.0.0.1", 00:11:01.836 "trsvcid": "54896" 00:11:01.836 }, 00:11:01.836 "auth": { 00:11:01.836 "state": "completed", 
00:11:01.836 "digest": "sha256", 00:11:01.836 "dhgroup": "ffdhe4096" 00:11:01.836 } 00:11:01.836 } 00:11:01.836 ]' 00:11:01.836 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.837 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.837 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.837 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:01.837 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.837 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.837 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.837 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.095 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:02.095 08:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:02.661 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.919 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:02.919 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.919 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.919 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.919 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.919 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:02.919 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:03.177 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.178 08:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.178 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.436 00:11:03.436 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.436 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.436 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.694 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.694 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.694 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.694 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.694 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.694 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.694 { 00:11:03.694 "cntlid": 29, 00:11:03.694 "qid": 0, 00:11:03.694 "state": "enabled", 00:11:03.694 "thread": "nvmf_tgt_poll_group_000", 00:11:03.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:03.694 "listen_address": { 00:11:03.694 "trtype": "TCP", 00:11:03.694 "adrfam": "IPv4", 00:11:03.694 "traddr": "10.0.0.3", 00:11:03.694 "trsvcid": "4420" 00:11:03.694 }, 00:11:03.694 "peer_address": { 00:11:03.694 "trtype": "TCP", 00:11:03.694 "adrfam": 
"IPv4", 00:11:03.694 "traddr": "10.0.0.1", 00:11:03.694 "trsvcid": "54914" 00:11:03.694 }, 00:11:03.694 "auth": { 00:11:03.694 "state": "completed", 00:11:03.694 "digest": "sha256", 00:11:03.694 "dhgroup": "ffdhe4096" 00:11:03.694 } 00:11:03.694 } 00:11:03.694 ]' 00:11:03.694 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.952 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.952 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.952 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:03.952 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.952 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.952 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.952 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.211 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:04.212 08:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:04.779 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.779 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:04.779 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.779 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.779 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.779 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.779 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:04.779 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:05.345 08:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:05.345 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:05.604 00:11:05.604 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.604 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.604 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.863 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.863 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.863 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.863 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.863 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.863 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.863 { 00:11:05.863 "cntlid": 31, 00:11:05.863 "qid": 0, 00:11:05.863 "state": "enabled", 00:11:05.863 "thread": "nvmf_tgt_poll_group_000", 00:11:05.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:05.863 "listen_address": { 00:11:05.863 "trtype": "TCP", 00:11:05.863 "adrfam": "IPv4", 00:11:05.863 "traddr": "10.0.0.3", 00:11:05.863 "trsvcid": "4420" 00:11:05.863 }, 00:11:05.863 "peer_address": { 00:11:05.863 "trtype": "TCP", 
00:11:05.863 "adrfam": "IPv4", 00:11:05.863 "traddr": "10.0.0.1", 00:11:05.863 "trsvcid": "54944" 00:11:05.863 }, 00:11:05.863 "auth": { 00:11:05.863 "state": "completed", 00:11:05.863 "digest": "sha256", 00:11:05.863 "dhgroup": "ffdhe4096" 00:11:05.863 } 00:11:05.863 } 00:11:05.863 ]' 00:11:05.863 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:06.121 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:06.121 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:06.121 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:06.121 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.121 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.121 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.121 08:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.380 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:06.380 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:07.317 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:07.576 
08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.576 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.145 00:11:08.145 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.145 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.145 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.404 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.404 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.404 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.404 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.404 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.404 08:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.404 { 00:11:08.404 "cntlid": 33, 00:11:08.404 "qid": 0, 00:11:08.404 "state": "enabled", 00:11:08.404 "thread": "nvmf_tgt_poll_group_000", 00:11:08.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:08.404 "listen_address": { 00:11:08.404 "trtype": "TCP", 00:11:08.404 "adrfam": "IPv4", 00:11:08.404 "traddr": 
"10.0.0.3", 00:11:08.404 "trsvcid": "4420" 00:11:08.404 }, 00:11:08.404 "peer_address": { 00:11:08.404 "trtype": "TCP", 00:11:08.404 "adrfam": "IPv4", 00:11:08.404 "traddr": "10.0.0.1", 00:11:08.404 "trsvcid": "41826" 00:11:08.404 }, 00:11:08.404 "auth": { 00:11:08.404 "state": "completed", 00:11:08.404 "digest": "sha256", 00:11:08.404 "dhgroup": "ffdhe6144" 00:11:08.404 } 00:11:08.404 } 00:11:08.404 ]' 00:11:08.404 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.404 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.404 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.404 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:08.404 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.663 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.663 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.663 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.922 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:08.922 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:09.490 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.490 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:09.490 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.490 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.490 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.490 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.490 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:09.490 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:09.749 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.328 00:11:10.328 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.328 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.328 08:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.587 { 00:11:10.587 "cntlid": 35, 00:11:10.587 "qid": 0, 00:11:10.587 "state": "enabled", 00:11:10.587 "thread": "nvmf_tgt_poll_group_000", 
00:11:10.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:10.587 "listen_address": { 00:11:10.587 "trtype": "TCP", 00:11:10.587 "adrfam": "IPv4", 00:11:10.587 "traddr": "10.0.0.3", 00:11:10.587 "trsvcid": "4420" 00:11:10.587 }, 00:11:10.587 "peer_address": { 00:11:10.587 "trtype": "TCP", 00:11:10.587 "adrfam": "IPv4", 00:11:10.587 "traddr": "10.0.0.1", 00:11:10.587 "trsvcid": "41870" 00:11:10.587 }, 00:11:10.587 "auth": { 00:11:10.587 "state": "completed", 00:11:10.587 "digest": "sha256", 00:11:10.587 "dhgroup": "ffdhe6144" 00:11:10.587 } 00:11:10.587 } 00:11:10.587 ]' 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:10.587 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.845 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.845 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.845 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.103 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:11.103 08:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:11.671 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.671 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:11.671 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.671 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.671 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.671 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.671 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:11.671 08:22:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.930 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.189 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.189 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.189 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.190 08:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.448 00:11:12.707 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.707 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.707 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.965 { 
00:11:12.965 "cntlid": 37, 00:11:12.965 "qid": 0, 00:11:12.965 "state": "enabled", 00:11:12.965 "thread": "nvmf_tgt_poll_group_000", 00:11:12.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:12.965 "listen_address": { 00:11:12.965 "trtype": "TCP", 00:11:12.965 "adrfam": "IPv4", 00:11:12.965 "traddr": "10.0.0.3", 00:11:12.965 "trsvcid": "4420" 00:11:12.965 }, 00:11:12.965 "peer_address": { 00:11:12.965 "trtype": "TCP", 00:11:12.965 "adrfam": "IPv4", 00:11:12.965 "traddr": "10.0.0.1", 00:11:12.965 "trsvcid": "41894" 00:11:12.965 }, 00:11:12.965 "auth": { 00:11:12.965 "state": "completed", 00:11:12.965 "digest": "sha256", 00:11:12.965 "dhgroup": "ffdhe6144" 00:11:12.965 } 00:11:12.965 } 00:11:12.965 ]' 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.965 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.570 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:13.570 08:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:14.136 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.136 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:14.136 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.137 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.137 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.137 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.137 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:14.137 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:14.395 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:14.962 00:11:14.962 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.962 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.962 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:15.221 { 00:11:15.221 "cntlid": 39, 00:11:15.221 "qid": 0, 00:11:15.221 "state": "enabled", 00:11:15.221 "thread": "nvmf_tgt_poll_group_000", 00:11:15.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:15.221 "listen_address": { 00:11:15.221 "trtype": "TCP", 00:11:15.221 "adrfam": "IPv4", 00:11:15.221 "traddr": "10.0.0.3", 00:11:15.221 "trsvcid": "4420" 00:11:15.221 }, 00:11:15.221 "peer_address": { 00:11:15.221 "trtype": "TCP", 00:11:15.221 "adrfam": "IPv4", 00:11:15.221 "traddr": "10.0.0.1", 00:11:15.221 "trsvcid": "41920" 00:11:15.221 }, 00:11:15.221 "auth": { 00:11:15.221 "state": "completed", 00:11:15.221 "digest": "sha256", 00:11:15.221 "dhgroup": "ffdhe6144" 00:11:15.221 } 00:11:15.221 } 00:11:15.221 ]' 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:15.221 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.480 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.480 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.480 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.739 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:15.739 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:16.676 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.677 08:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.613 00:11:17.613 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.613 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.613 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.873 { 00:11:17.873 "cntlid": 41, 00:11:17.873 "qid": 0, 00:11:17.873 "state": "enabled", 00:11:17.873 "thread": "nvmf_tgt_poll_group_000", 00:11:17.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:17.873 "listen_address": { 00:11:17.873 "trtype": "TCP", 00:11:17.873 "adrfam": "IPv4", 00:11:17.873 "traddr": "10.0.0.3", 00:11:17.873 "trsvcid": "4420" 00:11:17.873 }, 00:11:17.873 "peer_address": { 00:11:17.873 "trtype": "TCP", 00:11:17.873 "adrfam": "IPv4", 00:11:17.873 "traddr": "10.0.0.1", 00:11:17.873 "trsvcid": "41696" 00:11:17.873 }, 00:11:17.873 "auth": { 00:11:17.873 "state": "completed", 00:11:17.873 "digest": "sha256", 00:11:17.873 "dhgroup": "ffdhe8192" 00:11:17.873 } 00:11:17.873 } 00:11:17.873 ]' 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.873 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.132 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:18.132 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:19.068 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.068 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:19.068 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.068 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.068 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
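The entries above trace one full connect_authenticate pass (sha256 digest, ffdhe8192 DH group, key0 with ckey0 for bidirectional auth). The following is a condensed sketch of that pass, not part of the test script itself; the RPC variables and the assumption that the trace's rpc_cmd wrapper maps to rpc.py on the target's default socket are editorial shorthand, and key0/ckey0 are key names registered earlier in this run.

# Sketch of one connect_authenticate iteration as traced above (assumptions noted in the lead-in).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7

# Host side: restrict the allowed DH-HMAC-CHAP digests and DH groups for this pass.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# Target side: authorize the host on the subsystem with key0 (ckey0 enables controller authentication too).
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller over TCP, authenticating with the same key pair.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0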
00:11:19.068 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.068 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:19.068 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.327 08:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.895 00:11:19.895 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.895 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.895 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.154 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.154 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.154 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.154 08:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.154 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.154 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.154 { 00:11:20.154 "cntlid": 43, 00:11:20.154 "qid": 0, 00:11:20.154 "state": "enabled", 00:11:20.154 "thread": "nvmf_tgt_poll_group_000", 00:11:20.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:20.154 "listen_address": { 00:11:20.154 "trtype": "TCP", 00:11:20.154 "adrfam": "IPv4", 00:11:20.154 "traddr": "10.0.0.3", 00:11:20.154 "trsvcid": "4420" 00:11:20.154 }, 00:11:20.154 "peer_address": { 00:11:20.154 "trtype": "TCP", 00:11:20.154 "adrfam": "IPv4", 00:11:20.154 "traddr": "10.0.0.1", 00:11:20.154 "trsvcid": "41718" 00:11:20.154 }, 00:11:20.154 "auth": { 00:11:20.154 "state": "completed", 00:11:20.154 "digest": "sha256", 00:11:20.154 "dhgroup": "ffdhe8192" 00:11:20.154 } 00:11:20.154 } 00:11:20.154 ]' 00:11:20.154 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.412 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.412 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.412 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:20.412 08:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.412 08:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.412 08:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.412 08:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.728 08:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:20.728 08:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:21.677 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.677 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:21.677 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.677 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
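After each attach, the trace queries the target for the new qpair and asserts the negotiated digest, DH group, and auth state before detaching again. A minimal sketch of that verification step follows; the default target RPC socket, the here-strings, and the explicit exit-on-mismatch handling are assumptions for illustration, not the script's own helpers.

# Sketch of the qpair verification seen in the dumps above (sha256 / ffdhe8192 case).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]] || exit 1
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]] || exit 1
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]] || exit 1
# Tear the host-side controller down again before the next key is exercised.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0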
00:11:21.677 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.677 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.677 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:21.677 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.936 08:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.503 00:11:22.503 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.503 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.503 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.761 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.761 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.761 08:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.761 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.761 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.761 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.761 { 00:11:22.761 "cntlid": 45, 00:11:22.761 "qid": 0, 00:11:22.761 "state": "enabled", 00:11:22.761 "thread": "nvmf_tgt_poll_group_000", 00:11:22.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:22.761 "listen_address": { 00:11:22.761 "trtype": "TCP", 00:11:22.761 "adrfam": "IPv4", 00:11:22.761 "traddr": "10.0.0.3", 00:11:22.761 "trsvcid": "4420" 00:11:22.761 }, 00:11:22.761 "peer_address": { 00:11:22.761 "trtype": "TCP", 00:11:22.761 "adrfam": "IPv4", 00:11:22.761 "traddr": "10.0.0.1", 00:11:22.761 "trsvcid": "41738" 00:11:22.761 }, 00:11:22.761 "auth": { 00:11:22.761 "state": "completed", 00:11:22.761 "digest": "sha256", 00:11:22.761 "dhgroup": "ffdhe8192" 00:11:22.761 } 00:11:22.761 } 00:11:22.761 ]' 00:11:22.761 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.761 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.761 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.019 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:23.020 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.020 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.020 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.020 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.278 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:23.278 08:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:23.846 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.846 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:23.846 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:23.846 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.846 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.846 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.846 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:23.846 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.415 08:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.987 00:11:24.987 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.987 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.987 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.244 
08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.244 { 00:11:25.244 "cntlid": 47, 00:11:25.244 "qid": 0, 00:11:25.244 "state": "enabled", 00:11:25.244 "thread": "nvmf_tgt_poll_group_000", 00:11:25.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:25.244 "listen_address": { 00:11:25.244 "trtype": "TCP", 00:11:25.244 "adrfam": "IPv4", 00:11:25.244 "traddr": "10.0.0.3", 00:11:25.244 "trsvcid": "4420" 00:11:25.244 }, 00:11:25.244 "peer_address": { 00:11:25.244 "trtype": "TCP", 00:11:25.244 "adrfam": "IPv4", 00:11:25.244 "traddr": "10.0.0.1", 00:11:25.244 "trsvcid": "41780" 00:11:25.244 }, 00:11:25.244 "auth": { 00:11:25.244 "state": "completed", 00:11:25.244 "digest": "sha256", 00:11:25.244 "dhgroup": "ffdhe8192" 00:11:25.244 } 00:11:25.244 } 00:11:25.244 ]' 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:25.244 08:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.502 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.502 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.502 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.760 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:25.760 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:26.326 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.892 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.151 00:11:27.151 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.151 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.151 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.412 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.412 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.412 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.412 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.412 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.412 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.412 { 00:11:27.412 "cntlid": 49, 00:11:27.412 "qid": 0, 00:11:27.412 "state": "enabled", 00:11:27.412 "thread": "nvmf_tgt_poll_group_000", 00:11:27.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:27.412 "listen_address": { 00:11:27.412 "trtype": "TCP", 00:11:27.412 "adrfam": "IPv4", 00:11:27.412 "traddr": "10.0.0.3", 00:11:27.412 "trsvcid": "4420" 00:11:27.412 }, 00:11:27.412 "peer_address": { 00:11:27.412 "trtype": "TCP", 00:11:27.412 "adrfam": "IPv4", 00:11:27.412 "traddr": "10.0.0.1", 00:11:27.412 "trsvcid": "46336" 00:11:27.412 }, 00:11:27.412 "auth": { 00:11:27.412 "state": "completed", 00:11:27.412 "digest": "sha384", 00:11:27.412 "dhgroup": "null" 00:11:27.412 } 00:11:27.412 } 00:11:27.412 ]' 00:11:27.412 08:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.412 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.412 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.412 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:27.412 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.412 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.412 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.412 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.981 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:27.981 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:28.549 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.549 08:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:28.549 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.549 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.549 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.549 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.549 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:28.549 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.808 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.067 00:11:29.326 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.326 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:11:29.326 08:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.586 { 00:11:29.586 "cntlid": 51, 00:11:29.586 "qid": 0, 00:11:29.586 "state": "enabled", 00:11:29.586 "thread": "nvmf_tgt_poll_group_000", 00:11:29.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:29.586 "listen_address": { 00:11:29.586 "trtype": "TCP", 00:11:29.586 "adrfam": "IPv4", 00:11:29.586 "traddr": "10.0.0.3", 00:11:29.586 "trsvcid": "4420" 00:11:29.586 }, 00:11:29.586 "peer_address": { 00:11:29.586 "trtype": "TCP", 00:11:29.586 "adrfam": "IPv4", 00:11:29.586 "traddr": "10.0.0.1", 00:11:29.586 "trsvcid": "46360" 00:11:29.586 }, 00:11:29.586 "auth": { 00:11:29.586 "state": "completed", 00:11:29.586 "digest": "sha384", 00:11:29.586 "dhgroup": "null" 00:11:29.586 } 00:11:29.586 } 00:11:29.586 ]' 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.586 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.154 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:30.154 08:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:30.721 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.721 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.721 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:30.721 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.721 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.721 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.721 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.721 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:30.721 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.980 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.239 00:11:31.239 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.239 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.239 08:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.498 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.498 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.498 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.498 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.498 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.848 { 00:11:31.848 "cntlid": 53, 00:11:31.848 "qid": 0, 00:11:31.848 "state": "enabled", 00:11:31.848 "thread": "nvmf_tgt_poll_group_000", 00:11:31.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:31.848 "listen_address": { 00:11:31.848 "trtype": "TCP", 00:11:31.848 "adrfam": "IPv4", 00:11:31.848 "traddr": "10.0.0.3", 00:11:31.848 "trsvcid": "4420" 00:11:31.848 }, 00:11:31.848 "peer_address": { 00:11:31.848 "trtype": "TCP", 00:11:31.848 "adrfam": "IPv4", 00:11:31.848 "traddr": "10.0.0.1", 00:11:31.848 "trsvcid": "46390" 00:11:31.848 }, 00:11:31.848 "auth": { 00:11:31.848 "state": "completed", 00:11:31.848 "digest": "sha384", 00:11:31.848 "dhgroup": "null" 00:11:31.848 } 00:11:31.848 } 00:11:31.848 ]' 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.848 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.108 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:32.108 08:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:32.676 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.676 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:32.676 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.676 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.676 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.676 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.676 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:32.676 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.936 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.195 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.195 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:33.195 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.195 08:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.454 00:11:33.454 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.454 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:11:33.454 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.713 { 00:11:33.713 "cntlid": 55, 00:11:33.713 "qid": 0, 00:11:33.713 "state": "enabled", 00:11:33.713 "thread": "nvmf_tgt_poll_group_000", 00:11:33.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:33.713 "listen_address": { 00:11:33.713 "trtype": "TCP", 00:11:33.713 "adrfam": "IPv4", 00:11:33.713 "traddr": "10.0.0.3", 00:11:33.713 "trsvcid": "4420" 00:11:33.713 }, 00:11:33.713 "peer_address": { 00:11:33.713 "trtype": "TCP", 00:11:33.713 "adrfam": "IPv4", 00:11:33.713 "traddr": "10.0.0.1", 00:11:33.713 "trsvcid": "46420" 00:11:33.713 }, 00:11:33.713 "auth": { 00:11:33.713 "state": "completed", 00:11:33.713 "digest": "sha384", 00:11:33.713 "dhgroup": "null" 00:11:33.713 } 00:11:33.713 } 00:11:33.713 ]' 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.713 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.971 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:33.972 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.972 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.972 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.972 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.231 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:34.231 08:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
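Besides the SPDK host stack, each pass also exercises the kernel initiator: nvme-cli connects with the plaintext DHHC-1 secrets, disconnects, and the host is then removed from the subsystem. The sketch below condenses that sequence; the secret values are placeholders (the real ones appear inline in this log), and the HOSTNQN variable is shorthand for the host NQN used throughout this run.

# Sketch of the kernel-initiator check traced above (secrets redacted to placeholders).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$HOSTNQN" --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 \
        --dhchap-secret "DHHC-1:03:<host-key>" --dhchap-ctrl-secret "DHHC-1:01:<ctrl-key>"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# De-authorize the host again so the next digest/dhgroup/key combination starts clean.
$RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"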
00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:34.800 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:35.142 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:35.142 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.142 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:35.142 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:35.142 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:35.142 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.142 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.143 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.143 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.143 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.143 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.143 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.143 08:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.401 00:11:35.660 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.660 08:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.660 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.918 { 00:11:35.918 "cntlid": 57, 00:11:35.918 "qid": 0, 00:11:35.918 "state": "enabled", 00:11:35.918 "thread": "nvmf_tgt_poll_group_000", 00:11:35.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:35.918 "listen_address": { 00:11:35.918 "trtype": "TCP", 00:11:35.918 "adrfam": "IPv4", 00:11:35.918 "traddr": "10.0.0.3", 00:11:35.918 "trsvcid": "4420" 00:11:35.918 }, 00:11:35.918 "peer_address": { 00:11:35.918 "trtype": "TCP", 00:11:35.918 "adrfam": "IPv4", 00:11:35.918 "traddr": "10.0.0.1", 00:11:35.918 "trsvcid": "46434" 00:11:35.918 }, 00:11:35.918 "auth": { 00:11:35.918 "state": "completed", 00:11:35.918 "digest": "sha384", 00:11:35.918 "dhgroup": "ffdhe2048" 00:11:35.918 } 00:11:35.918 } 00:11:35.918 ]' 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.918 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.919 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.919 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.484 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:36.484 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: 
--dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:37.053 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.053 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:37.053 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.053 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.053 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.053 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.053 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:37.053 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.313 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.313 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.313 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.313 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.313 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.880 00:11:37.880 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.880 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.880 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.138 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.139 { 00:11:38.139 "cntlid": 59, 00:11:38.139 "qid": 0, 00:11:38.139 "state": "enabled", 00:11:38.139 "thread": "nvmf_tgt_poll_group_000", 00:11:38.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:38.139 "listen_address": { 00:11:38.139 "trtype": "TCP", 00:11:38.139 "adrfam": "IPv4", 00:11:38.139 "traddr": "10.0.0.3", 00:11:38.139 "trsvcid": "4420" 00:11:38.139 }, 00:11:38.139 "peer_address": { 00:11:38.139 "trtype": "TCP", 00:11:38.139 "adrfam": "IPv4", 00:11:38.139 "traddr": "10.0.0.1", 00:11:38.139 "trsvcid": "37514" 00:11:38.139 }, 00:11:38.139 "auth": { 00:11:38.139 "state": "completed", 00:11:38.139 "digest": "sha384", 00:11:38.139 "dhgroup": "ffdhe2048" 00:11:38.139 } 00:11:38.139 } 00:11:38.139 ]' 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.139 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.705 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:38.705 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:39.272 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.272 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:39.272 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.272 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.272 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.272 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.272 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:39.272 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.531 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.789 00:11:39.789 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.789 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.789 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.356 { 00:11:40.356 "cntlid": 61, 00:11:40.356 "qid": 0, 00:11:40.356 "state": "enabled", 00:11:40.356 "thread": "nvmf_tgt_poll_group_000", 00:11:40.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:40.356 "listen_address": { 00:11:40.356 "trtype": "TCP", 00:11:40.356 "adrfam": "IPv4", 00:11:40.356 "traddr": "10.0.0.3", 00:11:40.356 "trsvcid": "4420" 00:11:40.356 }, 00:11:40.356 "peer_address": { 00:11:40.356 "trtype": "TCP", 00:11:40.356 "adrfam": "IPv4", 00:11:40.356 "traddr": "10.0.0.1", 00:11:40.356 "trsvcid": "37542" 00:11:40.356 }, 00:11:40.356 "auth": { 00:11:40.356 "state": "completed", 00:11:40.356 "digest": "sha384", 00:11:40.356 "dhgroup": "ffdhe2048" 00:11:40.356 } 00:11:40.356 } 00:11:40.356 ]' 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.356 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.615 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:40.615 08:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:41.550 08:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.550 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:41.550 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.550 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.550 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.550 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.550 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:41.550 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.808 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:42.066 00:11:42.066 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.066 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.066 08:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.325 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.325 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.325 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.325 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.325 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.325 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.325 { 00:11:42.325 "cntlid": 63, 00:11:42.325 "qid": 0, 00:11:42.325 "state": "enabled", 00:11:42.325 "thread": "nvmf_tgt_poll_group_000", 00:11:42.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:42.325 "listen_address": { 00:11:42.325 "trtype": "TCP", 00:11:42.325 "adrfam": "IPv4", 00:11:42.325 "traddr": "10.0.0.3", 00:11:42.325 "trsvcid": "4420" 00:11:42.325 }, 00:11:42.325 "peer_address": { 00:11:42.325 "trtype": "TCP", 00:11:42.325 "adrfam": "IPv4", 00:11:42.325 "traddr": "10.0.0.1", 00:11:42.325 "trsvcid": "37570" 00:11:42.325 }, 00:11:42.325 "auth": { 00:11:42.325 "state": "completed", 00:11:42.325 "digest": "sha384", 00:11:42.325 "dhgroup": "ffdhe2048" 00:11:42.325 } 00:11:42.325 } 00:11:42.325 ]' 00:11:42.325 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.584 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.584 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.584 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:42.584 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.584 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.584 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.584 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.843 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:42.843 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.778 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.037 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.037 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.037 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:44.037 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:44.294 00:11:44.294 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.294 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.295 08:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.618 { 00:11:44.618 "cntlid": 65, 00:11:44.618 "qid": 0, 00:11:44.618 "state": "enabled", 00:11:44.618 "thread": "nvmf_tgt_poll_group_000", 00:11:44.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:44.618 "listen_address": { 00:11:44.618 "trtype": "TCP", 00:11:44.618 "adrfam": "IPv4", 00:11:44.618 "traddr": "10.0.0.3", 00:11:44.618 "trsvcid": "4420" 00:11:44.618 }, 00:11:44.618 "peer_address": { 00:11:44.618 "trtype": "TCP", 00:11:44.618 "adrfam": "IPv4", 00:11:44.618 "traddr": "10.0.0.1", 00:11:44.618 "trsvcid": "37600" 00:11:44.618 }, 00:11:44.618 "auth": { 00:11:44.618 "state": "completed", 00:11:44.618 "digest": "sha384", 00:11:44.618 "dhgroup": "ffdhe3072" 00:11:44.618 } 00:11:44.618 } 00:11:44.618 ]' 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.618 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.186 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:45.186 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:45.753 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.753 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:45.753 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.753 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.753 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.753 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.753 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:45.753 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.012 08:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.012 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:46.270 00:11:46.270 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.270 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.270 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.837 { 00:11:46.837 "cntlid": 67, 00:11:46.837 "qid": 0, 00:11:46.837 "state": "enabled", 00:11:46.837 "thread": "nvmf_tgt_poll_group_000", 00:11:46.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:46.837 "listen_address": { 00:11:46.837 "trtype": "TCP", 00:11:46.837 "adrfam": "IPv4", 00:11:46.837 "traddr": "10.0.0.3", 00:11:46.837 "trsvcid": "4420" 00:11:46.837 }, 00:11:46.837 "peer_address": { 00:11:46.837 "trtype": "TCP", 00:11:46.837 "adrfam": "IPv4", 00:11:46.837 "traddr": "10.0.0.1", 00:11:46.837 "trsvcid": "37632" 00:11:46.837 }, 00:11:46.837 "auth": { 00:11:46.837 "state": "completed", 00:11:46.837 "digest": "sha384", 00:11:46.837 "dhgroup": "ffdhe3072" 00:11:46.837 } 00:11:46.837 } 00:11:46.837 ]' 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.837 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.096 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:47.096 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:47.663 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.663 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:47.663 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.663 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.933 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.933 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.933 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:47.933 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.192 08:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:48.451 00:11:48.451 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.451 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.451 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.710 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.710 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.710 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.710 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.710 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.710 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.710 { 00:11:48.710 "cntlid": 69, 00:11:48.710 "qid": 0, 00:11:48.710 "state": "enabled", 00:11:48.710 "thread": "nvmf_tgt_poll_group_000", 00:11:48.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:48.710 "listen_address": { 00:11:48.710 "trtype": "TCP", 00:11:48.710 "adrfam": "IPv4", 00:11:48.710 "traddr": "10.0.0.3", 00:11:48.710 "trsvcid": "4420" 00:11:48.710 }, 00:11:48.710 "peer_address": { 00:11:48.710 "trtype": "TCP", 00:11:48.710 "adrfam": "IPv4", 00:11:48.710 "traddr": "10.0.0.1", 00:11:48.710 "trsvcid": "54378" 00:11:48.710 }, 00:11:48.710 "auth": { 00:11:48.710 "state": "completed", 00:11:48.710 "digest": "sha384", 00:11:48.710 "dhgroup": "ffdhe3072" 00:11:48.710 } 00:11:48.710 } 00:11:48.710 ]' 00:11:48.710 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.969 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.969 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.969 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:48.969 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.969 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.969 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:48.969 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.228 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:49.228 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:50.164 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.164 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:50.164 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.164 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.164 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.164 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.164 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:50.164 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.423 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:50.682 00:11:50.682 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.682 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.682 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.249 { 00:11:51.249 "cntlid": 71, 00:11:51.249 "qid": 0, 00:11:51.249 "state": "enabled", 00:11:51.249 "thread": "nvmf_tgt_poll_group_000", 00:11:51.249 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:51.249 "listen_address": { 00:11:51.249 "trtype": "TCP", 00:11:51.249 "adrfam": "IPv4", 00:11:51.249 "traddr": "10.0.0.3", 00:11:51.249 "trsvcid": "4420" 00:11:51.249 }, 00:11:51.249 "peer_address": { 00:11:51.249 "trtype": "TCP", 00:11:51.249 "adrfam": "IPv4", 00:11:51.249 "traddr": "10.0.0.1", 00:11:51.249 "trsvcid": "54412" 00:11:51.249 }, 00:11:51.249 "auth": { 00:11:51.249 "state": "completed", 00:11:51.249 "digest": "sha384", 00:11:51.249 "dhgroup": "ffdhe3072" 00:11:51.249 } 00:11:51.249 } 00:11:51.249 ]' 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.249 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.508 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:51.508 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.075 08:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.346 08:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.346 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.913 00:11:52.913 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.913 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.913 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.181 { 00:11:53.181 "cntlid": 73, 00:11:53.181 "qid": 0, 00:11:53.181 "state": "enabled", 00:11:53.181 "thread": "nvmf_tgt_poll_group_000", 00:11:53.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:53.181 "listen_address": { 00:11:53.181 "trtype": "TCP", 00:11:53.181 "adrfam": "IPv4", 00:11:53.181 "traddr": "10.0.0.3", 00:11:53.181 "trsvcid": "4420" 00:11:53.181 }, 00:11:53.181 "peer_address": { 00:11:53.181 "trtype": "TCP", 00:11:53.181 "adrfam": "IPv4", 00:11:53.181 "traddr": "10.0.0.1", 00:11:53.181 "trsvcid": "54446" 00:11:53.181 }, 00:11:53.181 "auth": { 00:11:53.181 "state": "completed", 00:11:53.181 "digest": "sha384", 00:11:53.181 "dhgroup": "ffdhe4096" 00:11:53.181 } 00:11:53.181 } 00:11:53.181 ]' 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:53.181 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.443 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.443 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.443 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.700 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:53.700 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:11:54.266 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.266 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:54.266 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.266 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.266 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.266 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.266 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:54.266 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.524 08:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.524 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:55.091 00:11:55.091 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.091 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.091 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.350 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.350 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.350 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.350 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.350 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.350 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.350 { 00:11:55.350 "cntlid": 75, 00:11:55.350 "qid": 0, 00:11:55.350 "state": "enabled", 00:11:55.350 "thread": "nvmf_tgt_poll_group_000", 00:11:55.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:55.350 "listen_address": { 00:11:55.350 "trtype": "TCP", 00:11:55.350 "adrfam": "IPv4", 00:11:55.350 "traddr": "10.0.0.3", 00:11:55.350 "trsvcid": "4420" 00:11:55.350 }, 00:11:55.350 "peer_address": { 00:11:55.350 "trtype": "TCP", 00:11:55.350 "adrfam": "IPv4", 00:11:55.350 "traddr": "10.0.0.1", 00:11:55.350 "trsvcid": "54474" 00:11:55.350 }, 00:11:55.350 "auth": { 00:11:55.350 "state": "completed", 00:11:55.350 "digest": "sha384", 00:11:55.350 "dhgroup": "ffdhe4096" 00:11:55.350 } 00:11:55.350 } 00:11:55.350 ]' 00:11:55.350 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.350 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.350 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.609 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:55.609 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.609 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.609 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.609 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.868 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:55.868 08:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:11:56.442 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.442 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:56.442 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.442 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.442 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.442 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.443 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:56.443 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.012 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:57.271 00:11:57.271 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.271 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.271 08:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.529 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.529 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.529 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.530 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.530 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.530 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.530 { 00:11:57.530 "cntlid": 77, 00:11:57.530 "qid": 0, 00:11:57.530 "state": "enabled", 00:11:57.530 "thread": "nvmf_tgt_poll_group_000", 00:11:57.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:57.530 "listen_address": { 00:11:57.530 "trtype": "TCP", 00:11:57.530 "adrfam": "IPv4", 00:11:57.530 "traddr": "10.0.0.3", 00:11:57.530 "trsvcid": "4420" 00:11:57.530 }, 00:11:57.530 "peer_address": { 00:11:57.530 "trtype": "TCP", 00:11:57.530 "adrfam": "IPv4", 00:11:57.530 "traddr": "10.0.0.1", 00:11:57.530 "trsvcid": "50688" 00:11:57.530 }, 00:11:57.530 "auth": { 00:11:57.530 "state": "completed", 00:11:57.530 "digest": "sha384", 00:11:57.530 "dhgroup": "ffdhe4096" 00:11:57.530 } 00:11:57.530 } 00:11:57.530 ]' 00:11:57.530 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.530 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.530 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:57.530 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:57.530 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.788 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.788 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.788 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.046 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:58.046 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:11:58.614 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.614 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:11:58.614 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.614 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.614 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.614 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.614 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:58.614 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.872 08:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:58.872 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:59.439 00:11:59.439 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.439 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.439 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.697 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.697 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.697 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.697 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.697 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.697 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.697 { 00:11:59.697 "cntlid": 79, 00:11:59.697 "qid": 0, 00:11:59.697 "state": "enabled", 00:11:59.697 "thread": "nvmf_tgt_poll_group_000", 00:11:59.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:11:59.697 "listen_address": { 00:11:59.697 "trtype": "TCP", 00:11:59.697 "adrfam": "IPv4", 00:11:59.697 "traddr": "10.0.0.3", 00:11:59.697 "trsvcid": "4420" 00:11:59.697 }, 00:11:59.697 "peer_address": { 00:11:59.698 "trtype": "TCP", 00:11:59.698 "adrfam": "IPv4", 00:11:59.698 "traddr": "10.0.0.1", 00:11:59.698 "trsvcid": "50722" 00:11:59.698 }, 00:11:59.698 "auth": { 00:11:59.698 "state": "completed", 00:11:59.698 "digest": "sha384", 00:11:59.698 "dhgroup": "ffdhe4096" 00:11:59.698 } 00:11:59.698 } 00:11:59.698 ]' 00:11:59.698 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.698 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.698 08:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.956 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:59.956 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.956 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.956 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.956 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.215 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:00.215 08:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:00.782 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.783 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:00.783 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.783 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.783 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.783 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.783 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.783 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:00.783 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.041 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.300 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.300 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.300 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.300 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:01.560 00:12:01.560 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.560 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.560 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.135 { 00:12:02.135 "cntlid": 81, 00:12:02.135 "qid": 0, 00:12:02.135 "state": "enabled", 00:12:02.135 "thread": "nvmf_tgt_poll_group_000", 00:12:02.135 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:02.135 "listen_address": { 00:12:02.135 "trtype": "TCP", 00:12:02.135 "adrfam": "IPv4", 00:12:02.135 "traddr": "10.0.0.3", 00:12:02.135 "trsvcid": "4420" 00:12:02.135 }, 00:12:02.135 "peer_address": { 00:12:02.135 "trtype": "TCP", 00:12:02.135 "adrfam": "IPv4", 00:12:02.135 "traddr": "10.0.0.1", 00:12:02.135 "trsvcid": "50756" 00:12:02.135 }, 00:12:02.135 "auth": { 00:12:02.135 "state": "completed", 00:12:02.135 "digest": "sha384", 00:12:02.135 "dhgroup": "ffdhe6144" 00:12:02.135 } 00:12:02.135 } 00:12:02.135 ]' 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.135 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.394 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:02.394 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:03.331 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.331 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:03.331 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.331 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.331 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.331 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.331 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:03.331 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.590 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:04.157 00:12:04.157 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.157 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.157 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.415 { 00:12:04.415 "cntlid": 83, 00:12:04.415 "qid": 0, 00:12:04.415 "state": "enabled", 00:12:04.415 "thread": "nvmf_tgt_poll_group_000", 00:12:04.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:04.415 "listen_address": { 00:12:04.415 "trtype": "TCP", 00:12:04.415 "adrfam": "IPv4", 00:12:04.415 "traddr": "10.0.0.3", 00:12:04.415 "trsvcid": "4420" 00:12:04.415 }, 00:12:04.415 "peer_address": { 00:12:04.415 "trtype": "TCP", 00:12:04.415 "adrfam": "IPv4", 00:12:04.415 "traddr": "10.0.0.1", 00:12:04.415 "trsvcid": "50780" 00:12:04.415 }, 00:12:04.415 "auth": { 00:12:04.415 "state": "completed", 00:12:04.415 "digest": "sha384", 
00:12:04.415 "dhgroup": "ffdhe6144" 00:12:04.415 } 00:12:04.415 } 00:12:04.415 ]' 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.415 08:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.415 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:04.415 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.415 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.415 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.415 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.674 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:04.674 08:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.610 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:06.177 00:12:06.177 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.177 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.177 08:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.433 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.433 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.433 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.433 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.433 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.433 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.433 { 00:12:06.433 "cntlid": 85, 00:12:06.434 "qid": 0, 00:12:06.434 "state": "enabled", 00:12:06.434 "thread": "nvmf_tgt_poll_group_000", 00:12:06.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:06.434 "listen_address": { 00:12:06.434 "trtype": "TCP", 00:12:06.434 "adrfam": "IPv4", 00:12:06.434 "traddr": "10.0.0.3", 00:12:06.434 "trsvcid": "4420" 00:12:06.434 }, 00:12:06.434 "peer_address": { 00:12:06.434 "trtype": "TCP", 00:12:06.434 "adrfam": "IPv4", 00:12:06.434 "traddr": "10.0.0.1", 00:12:06.434 "trsvcid": "50792" 
00:12:06.434 }, 00:12:06.434 "auth": { 00:12:06.434 "state": "completed", 00:12:06.434 "digest": "sha384", 00:12:06.434 "dhgroup": "ffdhe6144" 00:12:06.434 } 00:12:06.434 } 00:12:06.434 ]' 00:12:06.434 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.434 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.434 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.692 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:06.692 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.692 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.692 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.692 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.951 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:06.951 08:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:07.517 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.517 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:07.517 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.517 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.517 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.517 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.517 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:07.517 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.776 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.035 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.035 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:08.035 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:08.035 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:08.293 00:12:08.553 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.553 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.553 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.811 { 00:12:08.811 "cntlid": 87, 00:12:08.811 "qid": 0, 00:12:08.811 "state": "enabled", 00:12:08.811 "thread": "nvmf_tgt_poll_group_000", 00:12:08.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:08.811 "listen_address": { 00:12:08.811 "trtype": "TCP", 00:12:08.811 "adrfam": "IPv4", 00:12:08.811 "traddr": "10.0.0.3", 00:12:08.811 "trsvcid": "4420" 00:12:08.811 }, 00:12:08.811 "peer_address": { 00:12:08.811 "trtype": "TCP", 00:12:08.811 "adrfam": "IPv4", 00:12:08.811 "traddr": "10.0.0.1", 00:12:08.811 "trsvcid": 
"41428" 00:12:08.811 }, 00:12:08.811 "auth": { 00:12:08.811 "state": "completed", 00:12:08.811 "digest": "sha384", 00:12:08.811 "dhgroup": "ffdhe6144" 00:12:08.811 } 00:12:08.811 } 00:12:08.811 ]' 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.811 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.378 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:09.378 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:09.945 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.945 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:09.946 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.946 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.946 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.946 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.946 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.946 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:09.946 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.204 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.140 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.140 { 00:12:11.140 "cntlid": 89, 00:12:11.140 "qid": 0, 00:12:11.140 "state": "enabled", 00:12:11.140 "thread": "nvmf_tgt_poll_group_000", 00:12:11.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:11.140 "listen_address": { 00:12:11.140 "trtype": "TCP", 00:12:11.140 "adrfam": "IPv4", 00:12:11.140 "traddr": "10.0.0.3", 00:12:11.140 "trsvcid": "4420" 00:12:11.140 }, 00:12:11.140 "peer_address": { 00:12:11.140 
"trtype": "TCP", 00:12:11.140 "adrfam": "IPv4", 00:12:11.140 "traddr": "10.0.0.1", 00:12:11.140 "trsvcid": "41456" 00:12:11.140 }, 00:12:11.140 "auth": { 00:12:11.140 "state": "completed", 00:12:11.140 "digest": "sha384", 00:12:11.140 "dhgroup": "ffdhe8192" 00:12:11.140 } 00:12:11.140 } 00:12:11.140 ]' 00:12:11.140 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.399 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.400 08:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.400 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:11.400 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.400 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.400 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.400 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.721 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:11.721 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:12.670 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.670 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:12.670 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.670 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.670 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.670 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.670 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:12.670 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:12.929 08:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.929 08:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:13.497 00:12:13.497 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.497 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.497 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.756 { 00:12:13.756 "cntlid": 91, 00:12:13.756 "qid": 0, 00:12:13.756 "state": "enabled", 00:12:13.756 "thread": "nvmf_tgt_poll_group_000", 00:12:13.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 
00:12:13.756 "listen_address": { 00:12:13.756 "trtype": "TCP", 00:12:13.756 "adrfam": "IPv4", 00:12:13.756 "traddr": "10.0.0.3", 00:12:13.756 "trsvcid": "4420" 00:12:13.756 }, 00:12:13.756 "peer_address": { 00:12:13.756 "trtype": "TCP", 00:12:13.756 "adrfam": "IPv4", 00:12:13.756 "traddr": "10.0.0.1", 00:12:13.756 "trsvcid": "41464" 00:12:13.756 }, 00:12:13.756 "auth": { 00:12:13.756 "state": "completed", 00:12:13.756 "digest": "sha384", 00:12:13.756 "dhgroup": "ffdhe8192" 00:12:13.756 } 00:12:13.756 } 00:12:13.756 ]' 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.756 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.014 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:14.014 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.014 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.014 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.014 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.272 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:14.272 08:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:15.208 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.208 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:15.208 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.208 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.208 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.208 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.208 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:15.208 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.468 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.469 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.469 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.469 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:15.469 08:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.035 00:12:16.035 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.035 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.035 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.293 { 00:12:16.293 "cntlid": 93, 00:12:16.293 "qid": 0, 00:12:16.293 "state": "enabled", 00:12:16.293 "thread": 
"nvmf_tgt_poll_group_000", 00:12:16.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:16.293 "listen_address": { 00:12:16.293 "trtype": "TCP", 00:12:16.293 "adrfam": "IPv4", 00:12:16.293 "traddr": "10.0.0.3", 00:12:16.293 "trsvcid": "4420" 00:12:16.293 }, 00:12:16.293 "peer_address": { 00:12:16.293 "trtype": "TCP", 00:12:16.293 "adrfam": "IPv4", 00:12:16.293 "traddr": "10.0.0.1", 00:12:16.293 "trsvcid": "41512" 00:12:16.293 }, 00:12:16.293 "auth": { 00:12:16.293 "state": "completed", 00:12:16.293 "digest": "sha384", 00:12:16.293 "dhgroup": "ffdhe8192" 00:12:16.293 } 00:12:16.293 } 00:12:16.293 ]' 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.293 08:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.551 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:16.551 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.551 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.551 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.551 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.809 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:16.809 08:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:17.375 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.375 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:17.375 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.375 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.375 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.375 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.375 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:17.375 08:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:17.941 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:17.941 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.941 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.941 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:17.941 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:17.941 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.942 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:12:17.942 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.942 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.942 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.942 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:17.942 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.942 08:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.508 00:12:18.508 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.508 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.508 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.814 { 00:12:18.814 "cntlid": 95, 00:12:18.814 "qid": 0, 00:12:18.814 "state": "enabled", 00:12:18.814 
"thread": "nvmf_tgt_poll_group_000", 00:12:18.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:18.814 "listen_address": { 00:12:18.814 "trtype": "TCP", 00:12:18.814 "adrfam": "IPv4", 00:12:18.814 "traddr": "10.0.0.3", 00:12:18.814 "trsvcid": "4420" 00:12:18.814 }, 00:12:18.814 "peer_address": { 00:12:18.814 "trtype": "TCP", 00:12:18.814 "adrfam": "IPv4", 00:12:18.814 "traddr": "10.0.0.1", 00:12:18.814 "trsvcid": "57726" 00:12:18.814 }, 00:12:18.814 "auth": { 00:12:18.814 "state": "completed", 00:12:18.814 "digest": "sha384", 00:12:18.814 "dhgroup": "ffdhe8192" 00:12:18.814 } 00:12:18.814 } 00:12:18.814 ]' 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.814 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.139 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:19.139 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.089 08:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:20.089 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.348 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.606 00:12:20.606 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.606 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.606 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.864 { 00:12:20.864 "cntlid": 97, 00:12:20.864 "qid": 0, 00:12:20.864 "state": "enabled", 00:12:20.864 "thread": "nvmf_tgt_poll_group_000", 00:12:20.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:20.864 "listen_address": { 00:12:20.864 "trtype": "TCP", 00:12:20.864 "adrfam": "IPv4", 00:12:20.864 "traddr": "10.0.0.3", 00:12:20.864 "trsvcid": "4420" 00:12:20.864 }, 00:12:20.864 "peer_address": { 00:12:20.864 "trtype": "TCP", 00:12:20.864 "adrfam": "IPv4", 00:12:20.864 "traddr": "10.0.0.1", 00:12:20.864 "trsvcid": "57768" 00:12:20.864 }, 00:12:20.864 "auth": { 00:12:20.864 "state": "completed", 00:12:20.864 "digest": "sha512", 00:12:20.864 "dhgroup": "null" 00:12:20.864 } 00:12:20.864 } 00:12:20.864 ]' 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.864 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.122 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:21.122 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.122 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.122 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.122 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.381 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:21.381 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:21.948 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.948 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:21.948 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.948 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.948 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:21.948 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.948 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:21.948 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.518 08:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.779 00:12:22.779 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.779 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.779 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.037 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.037 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.037 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.037 08:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.037 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.037 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.037 { 00:12:23.037 "cntlid": 99, 00:12:23.037 "qid": 0, 00:12:23.037 "state": "enabled", 00:12:23.038 "thread": "nvmf_tgt_poll_group_000", 00:12:23.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:23.038 "listen_address": { 00:12:23.038 "trtype": "TCP", 00:12:23.038 "adrfam": "IPv4", 00:12:23.038 "traddr": "10.0.0.3", 00:12:23.038 "trsvcid": "4420" 00:12:23.038 }, 00:12:23.038 "peer_address": { 00:12:23.038 "trtype": "TCP", 00:12:23.038 "adrfam": "IPv4", 00:12:23.038 "traddr": "10.0.0.1", 00:12:23.038 "trsvcid": "57790" 00:12:23.038 }, 00:12:23.038 "auth": { 00:12:23.038 "state": "completed", 00:12:23.038 "digest": "sha512", 00:12:23.038 "dhgroup": "null" 00:12:23.038 } 00:12:23.038 } 00:12:23.038 ]' 00:12:23.038 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.038 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.038 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.038 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:23.038 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.297 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.297 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.297 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.556 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:23.556 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:24.124 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.124 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:24.124 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.124 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.124 08:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.124 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.124 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:24.124 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.452 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.711 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.711 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.711 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.711 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.970 00:12:24.970 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.970 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.970 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.229 { 00:12:25.229 "cntlid": 101, 00:12:25.229 "qid": 0, 00:12:25.229 "state": "enabled", 00:12:25.229 "thread": "nvmf_tgt_poll_group_000", 00:12:25.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:25.229 "listen_address": { 00:12:25.229 "trtype": "TCP", 00:12:25.229 "adrfam": "IPv4", 00:12:25.229 "traddr": "10.0.0.3", 00:12:25.229 "trsvcid": "4420" 00:12:25.229 }, 00:12:25.229 "peer_address": { 00:12:25.229 "trtype": "TCP", 00:12:25.229 "adrfam": "IPv4", 00:12:25.229 "traddr": "10.0.0.1", 00:12:25.229 "trsvcid": "57814" 00:12:25.229 }, 00:12:25.229 "auth": { 00:12:25.229 "state": "completed", 00:12:25.229 "digest": "sha512", 00:12:25.229 "dhgroup": "null" 00:12:25.229 } 00:12:25.229 } 00:12:25.229 ]' 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.229 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.796 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:25.796 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:26.364 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.364 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:26.364 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.364 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:26.364 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.364 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.364 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:26.364 08:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.623 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.880 00:12:26.880 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.880 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.880 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.139 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.139 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.139 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:27.139 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.139 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.139 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.139 { 00:12:27.139 "cntlid": 103, 00:12:27.139 "qid": 0, 00:12:27.139 "state": "enabled", 00:12:27.139 "thread": "nvmf_tgt_poll_group_000", 00:12:27.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:27.139 "listen_address": { 00:12:27.139 "trtype": "TCP", 00:12:27.139 "adrfam": "IPv4", 00:12:27.139 "traddr": "10.0.0.3", 00:12:27.139 "trsvcid": "4420" 00:12:27.139 }, 00:12:27.139 "peer_address": { 00:12:27.139 "trtype": "TCP", 00:12:27.139 "adrfam": "IPv4", 00:12:27.139 "traddr": "10.0.0.1", 00:12:27.139 "trsvcid": "38162" 00:12:27.139 }, 00:12:27.139 "auth": { 00:12:27.139 "state": "completed", 00:12:27.139 "digest": "sha512", 00:12:27.139 "dhgroup": "null" 00:12:27.139 } 00:12:27.139 } 00:12:27.139 ]' 00:12:27.139 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.397 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.397 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.397 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:27.397 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.397 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.397 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.397 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.655 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:27.655 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:28.589 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.589 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:28.589 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.589 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.589 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:28.589 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.589 08:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.589 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.160 00:12:29.160 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.160 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.160 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.418 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.418 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.418 
08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.418 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.418 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.418 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.418 { 00:12:29.418 "cntlid": 105, 00:12:29.418 "qid": 0, 00:12:29.418 "state": "enabled", 00:12:29.418 "thread": "nvmf_tgt_poll_group_000", 00:12:29.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:29.418 "listen_address": { 00:12:29.418 "trtype": "TCP", 00:12:29.418 "adrfam": "IPv4", 00:12:29.418 "traddr": "10.0.0.3", 00:12:29.418 "trsvcid": "4420" 00:12:29.418 }, 00:12:29.418 "peer_address": { 00:12:29.418 "trtype": "TCP", 00:12:29.418 "adrfam": "IPv4", 00:12:29.418 "traddr": "10.0.0.1", 00:12:29.418 "trsvcid": "38182" 00:12:29.418 }, 00:12:29.418 "auth": { 00:12:29.418 "state": "completed", 00:12:29.418 "digest": "sha512", 00:12:29.418 "dhgroup": "ffdhe2048" 00:12:29.418 } 00:12:29.418 } 00:12:29.418 ]' 00:12:29.418 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.418 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.418 08:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.418 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:29.418 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.418 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.418 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.418 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.986 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:29.986 08:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:30.552 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.552 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:30.552 08:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.552 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.552 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.552 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.552 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:30.552 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.811 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.070 00:12:31.070 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.070 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.070 08:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.329 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:31.329 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.329 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.329 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.329 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.329 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.329 { 00:12:31.329 "cntlid": 107, 00:12:31.329 "qid": 0, 00:12:31.329 "state": "enabled", 00:12:31.329 "thread": "nvmf_tgt_poll_group_000", 00:12:31.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:31.329 "listen_address": { 00:12:31.329 "trtype": "TCP", 00:12:31.329 "adrfam": "IPv4", 00:12:31.329 "traddr": "10.0.0.3", 00:12:31.329 "trsvcid": "4420" 00:12:31.329 }, 00:12:31.329 "peer_address": { 00:12:31.329 "trtype": "TCP", 00:12:31.329 "adrfam": "IPv4", 00:12:31.329 "traddr": "10.0.0.1", 00:12:31.329 "trsvcid": "38210" 00:12:31.329 }, 00:12:31.329 "auth": { 00:12:31.329 "state": "completed", 00:12:31.329 "digest": "sha512", 00:12:31.329 "dhgroup": "ffdhe2048" 00:12:31.329 } 00:12:31.329 } 00:12:31.329 ]' 00:12:31.329 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.587 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.587 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.587 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:31.587 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.587 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.587 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.587 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.846 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:31.846 08:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:32.784 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.784 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:32.784 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.784 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.784 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.784 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.784 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:32.784 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:33.043 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:33.043 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.043 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:33.043 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:33.043 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:33.043 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.044 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.044 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.044 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.044 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.044 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.044 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.044 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.302 00:12:33.302 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.302 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.302 08:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.561 { 00:12:33.561 "cntlid": 109, 00:12:33.561 "qid": 0, 00:12:33.561 "state": "enabled", 00:12:33.561 "thread": "nvmf_tgt_poll_group_000", 00:12:33.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:33.561 "listen_address": { 00:12:33.561 "trtype": "TCP", 00:12:33.561 "adrfam": "IPv4", 00:12:33.561 "traddr": "10.0.0.3", 00:12:33.561 "trsvcid": "4420" 00:12:33.561 }, 00:12:33.561 "peer_address": { 00:12:33.561 "trtype": "TCP", 00:12:33.561 "adrfam": "IPv4", 00:12:33.561 "traddr": "10.0.0.1", 00:12:33.561 "trsvcid": "38240" 00:12:33.561 }, 00:12:33.561 "auth": { 00:12:33.561 "state": "completed", 00:12:33.561 "digest": "sha512", 00:12:33.561 "dhgroup": "ffdhe2048" 00:12:33.561 } 00:12:33.561 } 00:12:33.561 ]' 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.561 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.819 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:33.819 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.819 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.819 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.819 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.078 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:34.078 08:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:34.645 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
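Each iteration of the auth loop traced above exercises one (digest, dhgroup, key) combination and follows the same shape. Below is a condensed sketch of a single pass, reconstructed from the commands in this transcript (the NQNs, the 10.0.0.3:4420 listener, the /var/tmp/host.sock host RPC socket, and the key1/ckey1 key names are the values used in this particular run; the exact assertions live in target/auth.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Limit the host-side bdev_nvme layer to the digest/dhgroup under test
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Register the host on the target subsystem with this iteration's DH-HMAC-CHAP key pair
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Attach a controller from the host side using the same keys
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify the qpair negotiated the expected auth state, digest and dhgroup, then tear down
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .state, .digest, .dhgroup'
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same pass is then repeated through nvme-cli (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:..., followed by nvme disconnect) before nvmf_subsystem_remove_host clears the host entry and the next dhgroup/key combination is tested, as the entries that follow show.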
00:12:34.645 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:34.645 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.645 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.645 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.645 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.645 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:34.645 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:34.904 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.905 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.471 00:12:35.471 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.471 08:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.471 08:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.729 { 00:12:35.729 "cntlid": 111, 00:12:35.729 "qid": 0, 00:12:35.729 "state": "enabled", 00:12:35.729 "thread": "nvmf_tgt_poll_group_000", 00:12:35.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:35.729 "listen_address": { 00:12:35.729 "trtype": "TCP", 00:12:35.729 "adrfam": "IPv4", 00:12:35.729 "traddr": "10.0.0.3", 00:12:35.729 "trsvcid": "4420" 00:12:35.729 }, 00:12:35.729 "peer_address": { 00:12:35.729 "trtype": "TCP", 00:12:35.729 "adrfam": "IPv4", 00:12:35.729 "traddr": "10.0.0.1", 00:12:35.729 "trsvcid": "38266" 00:12:35.729 }, 00:12:35.729 "auth": { 00:12:35.729 "state": "completed", 00:12:35.729 "digest": "sha512", 00:12:35.729 "dhgroup": "ffdhe2048" 00:12:35.729 } 00:12:35.729 } 00:12:35.729 ]' 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.729 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:35.730 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.730 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.730 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.730 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.312 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:36.313 08:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:36.921 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.180 08:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.439 00:12:37.439 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.439 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.439 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.007 { 00:12:38.007 "cntlid": 113, 00:12:38.007 "qid": 0, 00:12:38.007 "state": "enabled", 00:12:38.007 "thread": "nvmf_tgt_poll_group_000", 00:12:38.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:38.007 "listen_address": { 00:12:38.007 "trtype": "TCP", 00:12:38.007 "adrfam": "IPv4", 00:12:38.007 "traddr": "10.0.0.3", 00:12:38.007 "trsvcid": "4420" 00:12:38.007 }, 00:12:38.007 "peer_address": { 00:12:38.007 "trtype": "TCP", 00:12:38.007 "adrfam": "IPv4", 00:12:38.007 "traddr": "10.0.0.1", 00:12:38.007 "trsvcid": "33072" 00:12:38.007 }, 00:12:38.007 "auth": { 00:12:38.007 "state": "completed", 00:12:38.007 "digest": "sha512", 00:12:38.007 "dhgroup": "ffdhe3072" 00:12:38.007 } 00:12:38.007 } 00:12:38.007 ]' 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.007 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.267 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:38.267 08:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret 
DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:38.834 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.834 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:38.835 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.835 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.835 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.835 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.835 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:38.835 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.402 08:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.661 00:12:39.661 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.661 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.661 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.919 { 00:12:39.919 "cntlid": 115, 00:12:39.919 "qid": 0, 00:12:39.919 "state": "enabled", 00:12:39.919 "thread": "nvmf_tgt_poll_group_000", 00:12:39.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:39.919 "listen_address": { 00:12:39.919 "trtype": "TCP", 00:12:39.919 "adrfam": "IPv4", 00:12:39.919 "traddr": "10.0.0.3", 00:12:39.919 "trsvcid": "4420" 00:12:39.919 }, 00:12:39.919 "peer_address": { 00:12:39.919 "trtype": "TCP", 00:12:39.919 "adrfam": "IPv4", 00:12:39.919 "traddr": "10.0.0.1", 00:12:39.919 "trsvcid": "33092" 00:12:39.919 }, 00:12:39.919 "auth": { 00:12:39.919 "state": "completed", 00:12:39.919 "digest": "sha512", 00:12:39.919 "dhgroup": "ffdhe3072" 00:12:39.919 } 00:12:39.919 } 00:12:39.919 ]' 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.919 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.178 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:40.178 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.178 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.178 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.178 08:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.437 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:40.437 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid 
a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:41.004 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.263 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:41.263 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.263 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.263 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.263 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.263 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:41.263 08:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.522 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.781 00:12:41.781 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.781 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.781 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.349 { 00:12:42.349 "cntlid": 117, 00:12:42.349 "qid": 0, 00:12:42.349 "state": "enabled", 00:12:42.349 "thread": "nvmf_tgt_poll_group_000", 00:12:42.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:42.349 "listen_address": { 00:12:42.349 "trtype": "TCP", 00:12:42.349 "adrfam": "IPv4", 00:12:42.349 "traddr": "10.0.0.3", 00:12:42.349 "trsvcid": "4420" 00:12:42.349 }, 00:12:42.349 "peer_address": { 00:12:42.349 "trtype": "TCP", 00:12:42.349 "adrfam": "IPv4", 00:12:42.349 "traddr": "10.0.0.1", 00:12:42.349 "trsvcid": "33128" 00:12:42.349 }, 00:12:42.349 "auth": { 00:12:42.349 "state": "completed", 00:12:42.349 "digest": "sha512", 00:12:42.349 "dhgroup": "ffdhe3072" 00:12:42.349 } 00:12:42.349 } 00:12:42.349 ]' 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.349 08:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.607 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:42.607 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:43.542 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.542 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:43.542 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.542 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.542 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.542 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.542 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.542 08:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.542 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.801 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.801 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:43.801 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.801 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.060 00:12:44.060 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.060 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.060 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.319 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.319 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.319 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.319 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.319 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.319 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.319 { 00:12:44.319 "cntlid": 119, 00:12:44.319 "qid": 0, 00:12:44.319 "state": "enabled", 00:12:44.319 "thread": "nvmf_tgt_poll_group_000", 00:12:44.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:44.319 "listen_address": { 00:12:44.319 "trtype": "TCP", 00:12:44.319 "adrfam": "IPv4", 00:12:44.319 "traddr": "10.0.0.3", 00:12:44.319 "trsvcid": "4420" 00:12:44.319 }, 00:12:44.319 "peer_address": { 00:12:44.319 "trtype": "TCP", 00:12:44.319 "adrfam": "IPv4", 00:12:44.319 "traddr": "10.0.0.1", 00:12:44.319 "trsvcid": "33146" 00:12:44.319 }, 00:12:44.319 "auth": { 00:12:44.319 "state": "completed", 00:12:44.319 "digest": "sha512", 00:12:44.319 "dhgroup": "ffdhe3072" 00:12:44.319 } 00:12:44.319 } 00:12:44.319 ]' 00:12:44.319 08:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.578 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.578 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.578 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:44.578 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.578 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.578 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.578 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.836 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:44.836 08:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:45.403 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.663 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:46.229 00:12:46.229 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.229 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.229 08:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.488 { 00:12:46.488 "cntlid": 121, 00:12:46.488 "qid": 0, 00:12:46.488 "state": "enabled", 00:12:46.488 "thread": "nvmf_tgt_poll_group_000", 00:12:46.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:46.488 "listen_address": { 00:12:46.488 "trtype": "TCP", 00:12:46.488 "adrfam": "IPv4", 00:12:46.488 "traddr": "10.0.0.3", 00:12:46.488 "trsvcid": "4420" 00:12:46.488 }, 00:12:46.488 "peer_address": { 00:12:46.488 "trtype": "TCP", 00:12:46.488 "adrfam": "IPv4", 00:12:46.488 "traddr": "10.0.0.1", 00:12:46.488 "trsvcid": "33174" 00:12:46.488 }, 00:12:46.488 "auth": { 00:12:46.488 "state": "completed", 00:12:46.488 "digest": "sha512", 00:12:46.488 "dhgroup": "ffdhe4096" 00:12:46.488 } 00:12:46.488 } 00:12:46.488 ]' 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.488 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.747 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret 
DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:46.747 08:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:47.682 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.682 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:47.682 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.682 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.682 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.682 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.682 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:47.682 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.941 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.199 00:12:48.199 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.199 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.199 08:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.457 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.457 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.457 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.457 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.457 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.457 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.457 { 00:12:48.457 "cntlid": 123, 00:12:48.457 "qid": 0, 00:12:48.457 "state": "enabled", 00:12:48.457 "thread": "nvmf_tgt_poll_group_000", 00:12:48.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:48.457 "listen_address": { 00:12:48.457 "trtype": "TCP", 00:12:48.457 "adrfam": "IPv4", 00:12:48.457 "traddr": "10.0.0.3", 00:12:48.457 "trsvcid": "4420" 00:12:48.457 }, 00:12:48.457 "peer_address": { 00:12:48.457 "trtype": "TCP", 00:12:48.457 "adrfam": "IPv4", 00:12:48.457 "traddr": "10.0.0.1", 00:12:48.457 "trsvcid": "46106" 00:12:48.457 }, 00:12:48.457 "auth": { 00:12:48.457 "state": "completed", 00:12:48.457 "digest": "sha512", 00:12:48.457 "dhgroup": "ffdhe4096" 00:12:48.457 } 00:12:48.457 } 00:12:48.457 ]' 00:12:48.457 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.716 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.716 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.716 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:48.716 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.716 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.716 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.716 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.974 08:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:48.974 08:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:49.541 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.541 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:49.541 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.541 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.541 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.541 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.541 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:49.541 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.800 08:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.800 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.367 00:12:50.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.367 08:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.625 { 00:12:50.625 "cntlid": 125, 00:12:50.625 "qid": 0, 00:12:50.625 "state": "enabled", 00:12:50.625 "thread": "nvmf_tgt_poll_group_000", 00:12:50.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:50.625 "listen_address": { 00:12:50.625 "trtype": "TCP", 00:12:50.625 "adrfam": "IPv4", 00:12:50.625 "traddr": "10.0.0.3", 00:12:50.625 "trsvcid": "4420" 00:12:50.625 }, 00:12:50.625 "peer_address": { 00:12:50.625 "trtype": "TCP", 00:12:50.625 "adrfam": "IPv4", 00:12:50.625 "traddr": "10.0.0.1", 00:12:50.625 "trsvcid": "46130" 00:12:50.625 }, 00:12:50.625 "auth": { 00:12:50.625 "state": "completed", 00:12:50.625 "digest": "sha512", 00:12:50.625 "dhgroup": "ffdhe4096" 00:12:50.625 } 00:12:50.625 } 00:12:50.625 ]' 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.625 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.888 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:50.888 08:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:12:51.825 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.825 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:51.825 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.825 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.825 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.825 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.825 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:51.825 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.084 08:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.343 00:12:52.343 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.343 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.343 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.911 { 00:12:52.911 "cntlid": 127, 00:12:52.911 "qid": 0, 00:12:52.911 "state": "enabled", 00:12:52.911 "thread": "nvmf_tgt_poll_group_000", 00:12:52.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:52.911 "listen_address": { 00:12:52.911 "trtype": "TCP", 00:12:52.911 "adrfam": "IPv4", 00:12:52.911 "traddr": "10.0.0.3", 00:12:52.911 "trsvcid": "4420" 00:12:52.911 }, 00:12:52.911 "peer_address": { 00:12:52.911 "trtype": "TCP", 00:12:52.911 "adrfam": "IPv4", 00:12:52.911 "traddr": "10.0.0.1", 00:12:52.911 "trsvcid": "46168" 00:12:52.911 }, 00:12:52.911 "auth": { 00:12:52.911 "state": "completed", 00:12:52.911 "digest": "sha512", 00:12:52.911 "dhgroup": "ffdhe4096" 00:12:52.911 } 00:12:52.911 } 00:12:52.911 ]' 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.911 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.169 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:53.169 08:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.105 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.105 08:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.105 08:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.673 00:12:54.673 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.673 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.673 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.931 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.931 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.931 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.931 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.931 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.931 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.931 { 00:12:54.931 "cntlid": 129, 00:12:54.931 "qid": 0, 00:12:54.931 "state": "enabled", 00:12:54.931 "thread": "nvmf_tgt_poll_group_000", 00:12:54.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:54.931 "listen_address": { 00:12:54.931 "trtype": "TCP", 00:12:54.931 "adrfam": "IPv4", 00:12:54.931 "traddr": "10.0.0.3", 00:12:54.931 "trsvcid": "4420" 00:12:54.931 }, 00:12:54.931 "peer_address": { 00:12:54.931 "trtype": "TCP", 00:12:54.931 "adrfam": "IPv4", 00:12:54.931 "traddr": "10.0.0.1", 00:12:54.931 "trsvcid": "46200" 00:12:54.931 }, 00:12:54.931 "auth": { 00:12:54.931 "state": "completed", 00:12:54.931 "digest": "sha512", 00:12:54.931 "dhgroup": "ffdhe6144" 00:12:54.931 } 00:12:54.931 } 00:12:54.931 ]' 00:12:54.931 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.931 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.932 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.190 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:55.190 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.190 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.190 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.190 08:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.448 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:55.448 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:12:56.016 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.016 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:56.016 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.016 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.016 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.016 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.016 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:56.016 08:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:56.582 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:56.582 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.582 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.582 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:56.582 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:56.583 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.583 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.583 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.583 08:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.583 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.583 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.583 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.583 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.841 00:12:56.841 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.841 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.841 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.099 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.099 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.099 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.099 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.099 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.099 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.099 { 00:12:57.099 "cntlid": 131, 00:12:57.099 "qid": 0, 00:12:57.099 "state": "enabled", 00:12:57.099 "thread": "nvmf_tgt_poll_group_000", 00:12:57.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:57.099 "listen_address": { 00:12:57.099 "trtype": "TCP", 00:12:57.099 "adrfam": "IPv4", 00:12:57.099 "traddr": "10.0.0.3", 00:12:57.099 "trsvcid": "4420" 00:12:57.099 }, 00:12:57.099 "peer_address": { 00:12:57.099 "trtype": "TCP", 00:12:57.099 "adrfam": "IPv4", 00:12:57.099 "traddr": "10.0.0.1", 00:12:57.099 "trsvcid": "43082" 00:12:57.099 }, 00:12:57.099 "auth": { 00:12:57.099 "state": "completed", 00:12:57.099 "digest": "sha512", 00:12:57.099 "dhgroup": "ffdhe6144" 00:12:57.099 } 00:12:57.099 } 00:12:57.099 ]' 00:12:57.099 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.357 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.357 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.357 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:57.357 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:57.357 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.357 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.357 08:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.615 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:57.615 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:12:58.551 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.551 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:12:58.551 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.551 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.551 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.551 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.551 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:58.551 08:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:58.551 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:58.551 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.551 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.551 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:58.551 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:58.551 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.551 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.551 08:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.552 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.809 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.809 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.809 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.809 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.067 00:12:59.324 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.324 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.324 08:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.582 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.582 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.582 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.582 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.583 { 00:12:59.583 "cntlid": 133, 00:12:59.583 "qid": 0, 00:12:59.583 "state": "enabled", 00:12:59.583 "thread": "nvmf_tgt_poll_group_000", 00:12:59.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:12:59.583 "listen_address": { 00:12:59.583 "trtype": "TCP", 00:12:59.583 "adrfam": "IPv4", 00:12:59.583 "traddr": "10.0.0.3", 00:12:59.583 "trsvcid": "4420" 00:12:59.583 }, 00:12:59.583 "peer_address": { 00:12:59.583 "trtype": "TCP", 00:12:59.583 "adrfam": "IPv4", 00:12:59.583 "traddr": "10.0.0.1", 00:12:59.583 "trsvcid": "43100" 00:12:59.583 }, 00:12:59.583 "auth": { 00:12:59.583 "state": "completed", 00:12:59.583 "digest": "sha512", 00:12:59.583 "dhgroup": "ffdhe6144" 00:12:59.583 } 00:12:59.583 } 00:12:59.583 ]' 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.583 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.150 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:13:00.150 08:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:13:00.718 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.718 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:00.718 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.718 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.718 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.718 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.718 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:00.718 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.978 08:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:01.547 00:13:01.547 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.547 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.547 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.805 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.805 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.805 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.805 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.805 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.805 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.805 { 00:13:01.805 "cntlid": 135, 00:13:01.805 "qid": 0, 00:13:01.805 "state": "enabled", 00:13:01.805 "thread": "nvmf_tgt_poll_group_000", 00:13:01.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:01.805 "listen_address": { 00:13:01.805 "trtype": "TCP", 00:13:01.805 "adrfam": "IPv4", 00:13:01.805 "traddr": "10.0.0.3", 00:13:01.805 "trsvcid": "4420" 00:13:01.805 }, 00:13:01.805 "peer_address": { 00:13:01.805 "trtype": "TCP", 00:13:01.805 "adrfam": "IPv4", 00:13:01.805 "traddr": "10.0.0.1", 00:13:01.805 "trsvcid": "43122" 00:13:01.805 }, 00:13:01.805 "auth": { 00:13:01.805 "state": "completed", 00:13:01.805 "digest": "sha512", 00:13:01.805 "dhgroup": "ffdhe6144" 00:13:01.805 } 00:13:01.805 } 00:13:01.805 ]' 00:13:01.806 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.063 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.063 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.063 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:02.064 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.064 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.064 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.064 08:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.322 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:13:02.322 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:03.255 08:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.514 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.081 00:13:04.081 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.081 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.081 08:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.340 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.340 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.340 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.340 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.649 { 00:13:04.649 "cntlid": 137, 00:13:04.649 "qid": 0, 00:13:04.649 "state": "enabled", 00:13:04.649 "thread": "nvmf_tgt_poll_group_000", 00:13:04.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:04.649 "listen_address": { 00:13:04.649 "trtype": "TCP", 00:13:04.649 "adrfam": "IPv4", 00:13:04.649 "traddr": "10.0.0.3", 00:13:04.649 "trsvcid": "4420" 00:13:04.649 }, 00:13:04.649 "peer_address": { 00:13:04.649 "trtype": "TCP", 00:13:04.649 "adrfam": "IPv4", 00:13:04.649 "traddr": "10.0.0.1", 00:13:04.649 "trsvcid": "43148" 00:13:04.649 }, 00:13:04.649 "auth": { 00:13:04.649 "state": "completed", 00:13:04.649 "digest": "sha512", 00:13:04.649 "dhgroup": "ffdhe8192" 00:13:04.649 } 00:13:04.649 } 00:13:04.649 ]' 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.649 08:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.649 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.934 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:13:04.934 08:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:13:05.870 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.870 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:05.870 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.870 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.870 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.870 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.870 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:05.870 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:06.130 08:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.130 08:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.697 00:13:06.697 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.697 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.697 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.956 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.956 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.956 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.956 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.956 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.956 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.956 { 00:13:06.956 "cntlid": 139, 00:13:06.956 "qid": 0, 00:13:06.956 "state": "enabled", 00:13:06.956 "thread": "nvmf_tgt_poll_group_000", 00:13:06.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:06.956 "listen_address": { 00:13:06.956 "trtype": "TCP", 00:13:06.956 "adrfam": "IPv4", 00:13:06.956 "traddr": "10.0.0.3", 00:13:06.956 "trsvcid": "4420" 00:13:06.956 }, 00:13:06.956 "peer_address": { 00:13:06.956 "trtype": "TCP", 00:13:06.956 "adrfam": "IPv4", 00:13:06.956 "traddr": "10.0.0.1", 00:13:06.956 "trsvcid": "43178" 00:13:06.956 }, 00:13:06.956 "auth": { 00:13:06.956 "state": "completed", 00:13:06.956 "digest": "sha512", 00:13:06.956 "dhgroup": "ffdhe8192" 00:13:06.956 } 00:13:06.956 } 00:13:06.956 ]' 00:13:06.956 08:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.956 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.956 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.215 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:07.215 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.215 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.215 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.215 08:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.473 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:13:07.473 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: --dhchap-ctrl-secret DHHC-1:02:ZjBlZjQ0ZGJiY2QzYTVhODc3MzgyMTk1ZDEzZjE5Mjg2YTU4MjYyZjAwM2VmYWJi0fVyDQ==: 00:13:08.480 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.481 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:08.481 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.481 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.481 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.481 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:08.481 08:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.481 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.416 00:13:09.416 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.416 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.416 08:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.416 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.416 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.416 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.416 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.416 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.416 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.416 { 00:13:09.416 "cntlid": 141, 00:13:09.416 "qid": 0, 00:13:09.416 "state": "enabled", 00:13:09.416 "thread": "nvmf_tgt_poll_group_000", 00:13:09.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:09.416 "listen_address": { 00:13:09.416 "trtype": "TCP", 00:13:09.416 "adrfam": "IPv4", 00:13:09.416 "traddr": "10.0.0.3", 00:13:09.416 "trsvcid": "4420" 00:13:09.416 }, 00:13:09.416 "peer_address": { 00:13:09.416 "trtype": "TCP", 00:13:09.416 "adrfam": "IPv4", 00:13:09.416 "traddr": "10.0.0.1", 00:13:09.416 "trsvcid": "44034" 00:13:09.416 }, 00:13:09.416 "auth": { 00:13:09.416 "state": "completed", 00:13:09.416 "digest": 
"sha512", 00:13:09.416 "dhgroup": "ffdhe8192" 00:13:09.416 } 00:13:09.416 } 00:13:09.416 ]' 00:13:09.416 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.676 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.676 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.676 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:09.676 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.676 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.676 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.676 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.006 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:13:10.006 08:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:01:NThkYTUwMTllOTBjMzVkMTBlODU5MWQ0YjRmNTY1YTRJVb4H: 00:13:10.572 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.572 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:10.572 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.572 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.572 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.572 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.572 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:10.572 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.140 08:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.707 00:13:11.707 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.707 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.707 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.966 { 00:13:11.966 "cntlid": 143, 00:13:11.966 "qid": 0, 00:13:11.966 "state": "enabled", 00:13:11.966 "thread": "nvmf_tgt_poll_group_000", 00:13:11.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:11.966 "listen_address": { 00:13:11.966 "trtype": "TCP", 00:13:11.966 "adrfam": "IPv4", 00:13:11.966 "traddr": "10.0.0.3", 00:13:11.966 "trsvcid": "4420" 00:13:11.966 }, 00:13:11.966 "peer_address": { 00:13:11.966 "trtype": "TCP", 00:13:11.966 "adrfam": "IPv4", 00:13:11.966 "traddr": "10.0.0.1", 00:13:11.966 "trsvcid": "44052" 00:13:11.966 }, 00:13:11.966 "auth": { 00:13:11.966 "state": "completed", 00:13:11.966 
"digest": "sha512", 00:13:11.966 "dhgroup": "ffdhe8192" 00:13:11.966 } 00:13:11.966 } 00:13:11.966 ]' 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.966 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.225 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.225 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.225 08:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.484 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:13:12.484 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:13.052 08:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.325 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.270 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.270 { 00:13:14.270 "cntlid": 145, 00:13:14.270 "qid": 0, 00:13:14.270 "state": "enabled", 00:13:14.270 "thread": "nvmf_tgt_poll_group_000", 00:13:14.270 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:14.270 "listen_address": { 00:13:14.270 "trtype": "TCP", 00:13:14.270 "adrfam": "IPv4", 00:13:14.270 "traddr": "10.0.0.3", 00:13:14.270 "trsvcid": "4420" 00:13:14.270 }, 00:13:14.270 "peer_address": { 00:13:14.270 "trtype": "TCP", 00:13:14.270 "adrfam": "IPv4", 00:13:14.270 "traddr": "10.0.0.1", 00:13:14.270 "trsvcid": "44076" 00:13:14.270 }, 00:13:14.270 "auth": { 00:13:14.270 "state": "completed", 00:13:14.270 "digest": "sha512", 00:13:14.270 "dhgroup": "ffdhe8192" 00:13:14.270 } 00:13:14.270 } 00:13:14.270 ]' 00:13:14.270 08:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.528 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.528 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.528 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.528 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.528 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.528 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.528 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.788 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:13:14.788 08:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:00:OGVkZmQzYzQyMTc0NTY0ZDQwZDNmYjU1YmQ3YmEzZGZlZTJjNTcyZTY4NDdiZjM11WdBfw==: --dhchap-ctrl-secret DHHC-1:03:ZWM4OWZlZDk1OGQ2M2JmMThkYThhOTk2OTllZTY3MjJhNjRhZTI5NjJkNjk2Yjk4NTZhMmNiMzRlYzBjODhiZcnGtYA=: 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 00:13:15.725 08:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:15.725 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:16.306 request: 00:13:16.306 { 00:13:16.306 "name": "nvme0", 00:13:16.306 "trtype": "tcp", 00:13:16.306 "traddr": "10.0.0.3", 00:13:16.306 "adrfam": "ipv4", 00:13:16.306 "trsvcid": "4420", 00:13:16.306 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:16.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:16.306 "prchk_reftag": false, 00:13:16.306 "prchk_guard": false, 00:13:16.306 "hdgst": false, 00:13:16.306 "ddgst": false, 00:13:16.306 "dhchap_key": "key2", 00:13:16.306 "allow_unrecognized_csi": false, 00:13:16.307 "method": "bdev_nvme_attach_controller", 00:13:16.307 "req_id": 1 00:13:16.307 } 00:13:16.307 Got JSON-RPC error response 00:13:16.307 response: 00:13:16.307 { 00:13:16.307 "code": -5, 00:13:16.307 "message": "Input/output error" 00:13:16.307 } 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:16.307 
08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:16.307 08:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:16.879 request: 00:13:16.879 { 00:13:16.879 "name": "nvme0", 00:13:16.879 "trtype": "tcp", 00:13:16.879 "traddr": "10.0.0.3", 00:13:16.879 "adrfam": "ipv4", 00:13:16.879 "trsvcid": "4420", 00:13:16.879 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:16.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:16.879 "prchk_reftag": false, 00:13:16.879 "prchk_guard": false, 00:13:16.879 "hdgst": false, 00:13:16.879 "ddgst": false, 00:13:16.879 "dhchap_key": "key1", 00:13:16.879 "dhchap_ctrlr_key": "ckey2", 00:13:16.879 "allow_unrecognized_csi": false, 00:13:16.879 "method": "bdev_nvme_attach_controller", 00:13:16.879 "req_id": 1 00:13:16.879 } 00:13:16.879 Got JSON-RPC error response 00:13:16.879 response: 00:13:16.879 { 
00:13:16.879 "code": -5, 00:13:16.879 "message": "Input/output error" 00:13:16.879 } 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.879 08:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.446 
request: 00:13:17.446 { 00:13:17.446 "name": "nvme0", 00:13:17.446 "trtype": "tcp", 00:13:17.446 "traddr": "10.0.0.3", 00:13:17.446 "adrfam": "ipv4", 00:13:17.446 "trsvcid": "4420", 00:13:17.446 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:17.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:17.446 "prchk_reftag": false, 00:13:17.446 "prchk_guard": false, 00:13:17.446 "hdgst": false, 00:13:17.446 "ddgst": false, 00:13:17.446 "dhchap_key": "key1", 00:13:17.446 "dhchap_ctrlr_key": "ckey1", 00:13:17.446 "allow_unrecognized_csi": false, 00:13:17.446 "method": "bdev_nvme_attach_controller", 00:13:17.446 "req_id": 1 00:13:17.446 } 00:13:17.446 Got JSON-RPC error response 00:13:17.446 response: 00:13:17.446 { 00:13:17.446 "code": -5, 00:13:17.446 "message": "Input/output error" 00:13:17.446 } 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67675 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67675 ']' 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67675 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67675 00:13:17.446 killing process with pid 67675 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67675' 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67675 00:13:17.446 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67675 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:17.794 08:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=70842 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 70842 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70842 ']' 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.794 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:18.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70842 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70842 ']' 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:18.053 08:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 null0 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.76F 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.PB9 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PB9 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.faq 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.6Ox ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6Ox 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:18.621 08:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6Ev 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.XLg ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XLg 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.86p 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.622 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:18.622 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:13:18.622 08:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.557 nvme0n1 00:13:19.557 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.557 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.557 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.124 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.124 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.124 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.124 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.125 { 00:13:20.125 "cntlid": 1, 00:13:20.125 "qid": 0, 00:13:20.125 "state": "enabled", 00:13:20.125 "thread": "nvmf_tgt_poll_group_000", 00:13:20.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:20.125 "listen_address": { 00:13:20.125 "trtype": "TCP", 00:13:20.125 "adrfam": "IPv4", 00:13:20.125 "traddr": "10.0.0.3", 00:13:20.125 "trsvcid": "4420" 00:13:20.125 }, 00:13:20.125 "peer_address": { 00:13:20.125 "trtype": "TCP", 00:13:20.125 "adrfam": "IPv4", 00:13:20.125 "traddr": "10.0.0.1", 00:13:20.125 "trsvcid": "58750" 00:13:20.125 }, 00:13:20.125 "auth": { 00:13:20.125 "state": "completed", 00:13:20.125 "digest": "sha512", 00:13:20.125 "dhgroup": "ffdhe8192" 00:13:20.125 } 00:13:20.125 } 00:13:20.125 ]' 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.125 08:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.383 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:13:20.383 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key3 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:21.322 08:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:21.580 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:21.580 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:21.580 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:21.580 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:21.580 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:21.581 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:21.581 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:21.581 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:21.581 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.581 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.838 request: 00:13:21.838 { 00:13:21.838 "name": "nvme0", 00:13:21.838 "trtype": "tcp", 00:13:21.838 "traddr": "10.0.0.3", 00:13:21.838 "adrfam": "ipv4", 00:13:21.838 "trsvcid": "4420", 00:13:21.838 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:21.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:21.838 "prchk_reftag": false, 00:13:21.838 "prchk_guard": false, 00:13:21.838 "hdgst": false, 00:13:21.838 "ddgst": false, 00:13:21.838 "dhchap_key": "key3", 00:13:21.838 "allow_unrecognized_csi": false, 00:13:21.838 "method": "bdev_nvme_attach_controller", 00:13:21.838 "req_id": 1 00:13:21.838 } 00:13:21.838 Got JSON-RPC error response 00:13:21.838 response: 00:13:21.838 { 00:13:21.838 "code": -5, 00:13:21.838 "message": "Input/output error" 00:13:21.838 } 00:13:22.097 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:22.097 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.097 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.097 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.097 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:22.097 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:22.097 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:22.097 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:22.355 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.356 08:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.614 request: 00:13:22.614 { 00:13:22.614 "name": "nvme0", 00:13:22.614 "trtype": "tcp", 00:13:22.614 "traddr": "10.0.0.3", 00:13:22.614 "adrfam": "ipv4", 00:13:22.614 "trsvcid": "4420", 00:13:22.614 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:22.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:22.614 "prchk_reftag": false, 00:13:22.614 "prchk_guard": false, 00:13:22.614 "hdgst": false, 00:13:22.614 "ddgst": false, 00:13:22.614 "dhchap_key": "key3", 00:13:22.614 "allow_unrecognized_csi": false, 00:13:22.614 "method": "bdev_nvme_attach_controller", 00:13:22.614 "req_id": 1 00:13:22.614 } 00:13:22.614 Got JSON-RPC error response 00:13:22.614 response: 00:13:22.614 { 00:13:22.614 "code": -5, 00:13:22.614 "message": "Input/output error" 00:13:22.614 } 00:13:22.614 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:22.614 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.614 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.614 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.614 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:22.615 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:22.615 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:22.615 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:22.615 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:22.615 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:22.873 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:23.440 request: 00:13:23.440 { 00:13:23.440 "name": "nvme0", 00:13:23.440 "trtype": "tcp", 00:13:23.440 "traddr": "10.0.0.3", 00:13:23.440 "adrfam": "ipv4", 00:13:23.440 "trsvcid": "4420", 00:13:23.440 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:23.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:23.440 "prchk_reftag": false, 00:13:23.440 "prchk_guard": false, 00:13:23.440 "hdgst": false, 00:13:23.440 "ddgst": false, 00:13:23.440 "dhchap_key": "key0", 00:13:23.440 "dhchap_ctrlr_key": "key1", 00:13:23.440 "allow_unrecognized_csi": false, 00:13:23.440 "method": "bdev_nvme_attach_controller", 00:13:23.440 "req_id": 1 00:13:23.440 } 00:13:23.440 Got JSON-RPC error response 00:13:23.440 response: 00:13:23.440 { 00:13:23.440 "code": -5, 00:13:23.440 "message": "Input/output error" 00:13:23.440 } 00:13:23.440 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:23.440 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:23.440 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:23.440 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:23.440 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:23.440 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:23.440 08:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:23.699 nvme0n1 00:13:23.699 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:23.699 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:23.699 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.958 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.958 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.958 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.216 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 00:13:24.216 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.216 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.216 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.216 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:24.216 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:24.216 08:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:25.156 nvme0n1 00:13:25.415 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:25.415 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:25.415 08:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.673 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.673 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:25.673 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.673 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.673 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.673 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:25.673 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:25.673 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.932 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.932 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:13:25.932 08:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -l 0 --dhchap-secret DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: --dhchap-ctrl-secret DHHC-1:03:NWZjYjRiY2VhYTRhYzBlMWEwYzhjOTE3MWI5YjViNDRiNDI2N2I1OGVhZWU2N2NiMmUwMzQyYjlhMzQzNmYzMNqQvv8=: 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.498 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:26.756 08:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:27.693 request: 00:13:27.693 { 00:13:27.693 "name": "nvme0", 00:13:27.693 "trtype": "tcp", 00:13:27.693 "traddr": "10.0.0.3", 00:13:27.693 "adrfam": "ipv4", 00:13:27.693 "trsvcid": "4420", 00:13:27.693 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:27.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7", 00:13:27.693 "prchk_reftag": false, 00:13:27.693 "prchk_guard": false, 00:13:27.693 "hdgst": false, 00:13:27.693 "ddgst": false, 00:13:27.693 "dhchap_key": "key1", 00:13:27.693 "allow_unrecognized_csi": false, 00:13:27.693 "method": "bdev_nvme_attach_controller", 00:13:27.693 "req_id": 1 00:13:27.693 } 00:13:27.693 Got JSON-RPC error response 00:13:27.693 response: 00:13:27.693 { 00:13:27.693 "code": -5, 00:13:27.693 "message": "Input/output error" 00:13:27.693 } 00:13:27.693 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:27.693 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:27.693 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:27.693 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:27.693 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:27.693 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:27.693 08:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:28.651 nvme0n1 00:13:28.651 
08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:28.651 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:28.651 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.909 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.910 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.910 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.169 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:29.169 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.169 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.169 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.169 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:29.169 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:29.169 08:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:29.427 nvme0n1 00:13:29.427 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:29.427 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:29.427 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.685 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.685 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.685 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.252 08:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: '' 2s 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: ]] 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NWZlNzkzNTE4NDY2YmM3MGUwYjUwYWM1MzRjYmExNDSuj1pZ: 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:30.252 08:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: 2s 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:32.153 08:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: ]] 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZGE3MGEzNzNlZTY5NDdlNGU5NzdlMmQzNmJkMTczM2QwNjZhNGYyN2MyMDM5MzA5riGQCQ==: 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:32.153 08:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:34.684 08:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:35.252 nvme0n1 00:13:35.252 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:35.252 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.252 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.252 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.252 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:35.252 08:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:35.820 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:35.820 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.820 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:36.387 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.387 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:36.387 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.387 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.387 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.387 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:36.387 08:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:36.645 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:36.645 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:36.645 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.903 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:36.904 08:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:36.904 08:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:37.469 request: 00:13:37.469 { 00:13:37.469 "name": "nvme0", 00:13:37.469 "dhchap_key": "key1", 00:13:37.469 "dhchap_ctrlr_key": "key3", 00:13:37.469 "method": "bdev_nvme_set_keys", 00:13:37.469 "req_id": 1 00:13:37.469 } 00:13:37.469 Got JSON-RPC error response 00:13:37.469 response: 00:13:37.469 { 00:13:37.469 "code": -13, 00:13:37.469 "message": "Permission denied" 00:13:37.469 } 00:13:37.469 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:37.469 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:37.470 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:37.470 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:37.470 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:37.470 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:37.470 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.728 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:37.728 08:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:39.117 08:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:40.051 nvme0n1 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:40.051 08:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:40.987 request: 00:13:40.987 { 00:13:40.987 "name": "nvme0", 00:13:40.987 "dhchap_key": "key2", 00:13:40.987 "dhchap_ctrlr_key": "key0", 00:13:40.987 "method": "bdev_nvme_set_keys", 00:13:40.987 "req_id": 1 00:13:40.987 } 00:13:40.987 Got JSON-RPC error response 00:13:40.987 response: 00:13:40.987 { 00:13:40.987 "code": -13, 00:13:40.987 "message": "Permission denied" 00:13:40.987 } 00:13:40.987 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:40.987 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.987 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.987 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.987 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:40.987 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.987 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:41.246 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:41.246 08:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:42.206 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:42.206 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:42.206 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.466 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:42.466 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:42.466 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:42.466 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67713 00:13:42.466 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 67713 ']' 00:13:42.466 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 67713 00:13:42.466 08:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:42.466 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.466 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67713 00:13:42.466 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:42.466 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:42.466 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 67713' 00:13:42.466 killing process with pid 67713 00:13:42.466 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 67713 00:13:42.467 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 67713 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:43.034 rmmod nvme_tcp 00:13:43.034 rmmod nvme_fabrics 00:13:43.034 rmmod nvme_keyring 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 70842 ']' 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 70842 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70842 ']' 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70842 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70842 00:13:43.034 killing process with pid 70842 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70842' 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70842 00:13:43.034 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70842 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 
00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:43.293 08:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:43.293 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.76F /tmp/spdk.key-sha256.faq /tmp/spdk.key-sha384.6Ev /tmp/spdk.key-sha512.86p /tmp/spdk.key-sha512.PB9 /tmp/spdk.key-sha384.6Ox /tmp/spdk.key-sha256.XLg '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:43.552 00:13:43.552 real 3m20.607s 00:13:43.552 user 8m0.264s 00:13:43.552 sys 0m31.895s 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.552 ************************************ 00:13:43.552 END TEST nvmf_auth_target 
00:13:43.552 ************************************ 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.552 ************************************ 00:13:43.552 START TEST nvmf_bdevio_no_huge 00:13:43.552 ************************************ 00:13:43.552 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:43.812 * Looking for test storage... 00:13:43.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:43.812 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:43.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.813 --rc genhtml_branch_coverage=1 00:13:43.813 --rc genhtml_function_coverage=1 00:13:43.813 --rc genhtml_legend=1 00:13:43.813 --rc geninfo_all_blocks=1 00:13:43.813 --rc geninfo_unexecuted_blocks=1 00:13:43.813 00:13:43.813 ' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:43.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.813 --rc genhtml_branch_coverage=1 00:13:43.813 --rc genhtml_function_coverage=1 00:13:43.813 --rc genhtml_legend=1 00:13:43.813 --rc geninfo_all_blocks=1 00:13:43.813 --rc geninfo_unexecuted_blocks=1 00:13:43.813 00:13:43.813 ' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:43.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.813 --rc genhtml_branch_coverage=1 00:13:43.813 --rc genhtml_function_coverage=1 00:13:43.813 --rc genhtml_legend=1 00:13:43.813 --rc geninfo_all_blocks=1 00:13:43.813 --rc geninfo_unexecuted_blocks=1 00:13:43.813 00:13:43.813 ' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:43.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.813 --rc genhtml_branch_coverage=1 00:13:43.813 --rc genhtml_function_coverage=1 00:13:43.813 --rc genhtml_legend=1 00:13:43.813 --rc geninfo_all_blocks=1 00:13:43.813 --rc geninfo_unexecuted_blocks=1 00:13:43.813 00:13:43.813 ' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.813 
08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:43.813 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
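[annotation] The "[: : integer expression expected" message above is bash's test builtin objecting to an empty string handed to a numeric -eq comparison in build_nvmf_app_args (the traced command is '[' '' -eq 1 ']' at common.sh line 33); the check simply evaluates false and the run continues. A two-line illustration of the failure mode and a defensively defaulted variant (flag is a placeholder name, not the actual variable in common.sh):

  flag=""
  [ "$flag" -eq 1 ] && echo hugepages       # stderr: "[: : integer expression expected"
  [ "${flag:-0}" -eq 1 ] && echo hugepages  # empty value defaults to 0, no warning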
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:43.813 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:43.814 
08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:43.814 Cannot find device "nvmf_init_br" 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:43.814 Cannot find device "nvmf_init_br2" 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:43.814 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:44.073 Cannot find device "nvmf_tgt_br" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.073 Cannot find device "nvmf_tgt_br2" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:44.073 Cannot find device "nvmf_init_br" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:44.073 Cannot find device "nvmf_init_br2" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:44.073 Cannot find device "nvmf_tgt_br" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:44.073 Cannot find device "nvmf_tgt_br2" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:44.073 Cannot find device "nvmf_br" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:44.073 Cannot find device "nvmf_init_if" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:44.073 Cannot find device "nvmf_init_if2" 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:44.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:44.073 08:24:45 
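[annotation] At this point nvmf_veth_init has built the test topology: namespace nvmf_tgt_ns_spdk holds the target-side veth ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), the initiator-side ends stay in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), and bridge nvmf_br ties the peer interfaces together. A condensed sketch of the same wiring for one interface pair, using the names and addresses from the trace (error suppression and the second pair omitted):

  ip netns add nvmf_tgt_ns_spdk

  # one veth pair per side: the *_if end carries the address, the *_br end joins the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

The iptables inserts and the four ping checks that follow in the trace open TCP/4420 on the initiator interfaces and confirm that each side can reach the other across the bridge before any NVMe-oF traffic is attempted.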
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.073 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:44.332 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.332 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:13:44.332 00:13:44.332 --- 10.0.0.3 ping statistics --- 00:13:44.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.332 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:44.332 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:44.332 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:13:44.332 00:13:44.332 --- 10.0.0.4 ping statistics --- 00:13:44.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.332 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:13:44.332 00:13:44.332 --- 10.0.0.1 ping statistics --- 00:13:44.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.332 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:44.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:44.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:13:44.332 00:13:44.332 --- 10.0.0.2 ping statistics --- 00:13:44.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.332 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # return 0 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=71507 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 71507 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71507 ']' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.332 08:24:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:44.332 [2024-10-15 08:24:45.962613] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:13:44.332 [2024-10-15 08:24:45.962720] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:44.591 [2024-10-15 08:24:46.119011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:44.591 [2024-10-15 08:24:46.261144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.591 [2024-10-15 08:24:46.261217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.591 [2024-10-15 08:24:46.261231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.591 [2024-10-15 08:24:46.261242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.591 [2024-10-15 08:24:46.261251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.591 [2024-10-15 08:24:46.262221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:44.591 [2024-10-15 08:24:46.262378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:44.591 [2024-10-15 08:24:46.262501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:44.591 [2024-10-15 08:24:46.264107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.591 [2024-10-15 08:24:46.270559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 [2024-10-15 08:24:47.087088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 Malloc0 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.541 08:24:47 
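[annotation] nvmfappstart launched the target inside the namespace with hugepages disabled: -m 0x78 is core mask 0b1111000, i.e. cores 3 through 6, which matches the four "Reactor started on core 3/4/5/6" notices above, and --no-huge together with -s 1024 backs DPDK with 1024 MB of ordinary anonymous memory (visible as --no-huge / -m 1024 in the echoed EAL parameters). The equivalent manual launch, with the binary path exactly as traced:

  # 0x78 = 0b1111000 -> reactors on cores 3, 4, 5 and 6; no hugepages, 1024 MB of memory
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78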
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:45.541 [2024-10-15 08:24:47.173636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:13:45.541 { 00:13:45.541 "params": { 00:13:45.541 "name": "Nvme$subsystem", 00:13:45.541 "trtype": "$TEST_TRANSPORT", 00:13:45.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:45.541 "adrfam": "ipv4", 00:13:45.541 "trsvcid": "$NVMF_PORT", 00:13:45.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:45.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:45.541 "hdgst": ${hdgst:-false}, 00:13:45.541 "ddgst": ${ddgst:-false} 00:13:45.541 }, 00:13:45.541 "method": "bdev_nvme_attach_controller" 00:13:45.541 } 00:13:45.541 EOF 00:13:45.541 )") 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
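[annotation] bdevio is not pointed at a local device here; it is handed a JSON config on /dev/fd/62 that attaches an NVMe-oF controller over TCP. The provisioning performed just above through rpc_cmd, written out as plain scripts/rpc.py calls against the running target (a sketch; the test uses the default /var/tmp/spdk.sock RPC socket seen in waitforlisten):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json then renders the bdev_nvme_attach_controller parameters printed in the trace that follows, so the bdevio process builds its Nvme1n1 bdev by connecting to nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420 rather than opening any local hardware.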
00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:13:45.541 08:24:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:13:45.541 "params": { 00:13:45.541 "name": "Nvme1", 00:13:45.541 "trtype": "tcp", 00:13:45.541 "traddr": "10.0.0.3", 00:13:45.541 "adrfam": "ipv4", 00:13:45.541 "trsvcid": "4420", 00:13:45.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.541 "hdgst": false, 00:13:45.541 "ddgst": false 00:13:45.541 }, 00:13:45.541 "method": "bdev_nvme_attach_controller" 00:13:45.541 }' 00:13:45.541 [2024-10-15 08:24:47.232423] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:13:45.541 [2024-10-15 08:24:47.232510] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71547 ] 00:13:45.799 [2024-10-15 08:24:47.371409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.799 [2024-10-15 08:24:47.464022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.799 [2024-10-15 08:24:47.464134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.799 [2024-10-15 08:24:47.464141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.799 [2024-10-15 08:24:47.478531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:46.057 I/O targets: 00:13:46.057 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:46.057 00:13:46.057 00:13:46.057 CUnit - A unit testing framework for C - Version 2.1-3 00:13:46.057 http://cunit.sourceforge.net/ 00:13:46.057 00:13:46.057 00:13:46.057 Suite: bdevio tests on: Nvme1n1 00:13:46.057 Test: blockdev write read block ...passed 00:13:46.057 Test: blockdev write zeroes read block ...passed 00:13:46.057 Test: blockdev write zeroes read no split ...passed 00:13:46.057 Test: blockdev write zeroes read split ...passed 00:13:46.057 Test: blockdev write zeroes read split partial ...passed 00:13:46.057 Test: blockdev reset ...[2024-10-15 08:24:47.738620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:46.057 [2024-10-15 08:24:47.738767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193b720 (9): Bad file descriptor 00:13:46.314 [2024-10-15 08:24:47.834967] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:46.314 passed 00:13:46.314 Test: blockdev write read 8 blocks ...passed 00:13:46.314 Test: blockdev write read size > 128k ...passed 00:13:46.314 Test: blockdev write read invalid size ...passed 00:13:46.314 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:46.314 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:46.314 Test: blockdev write read max offset ...passed 00:13:46.314 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:46.314 Test: blockdev writev readv 8 blocks ...passed 00:13:46.314 Test: blockdev writev readv 30 x 1block ...passed 00:13:46.314 Test: blockdev writev readv block ...passed 00:13:46.314 Test: blockdev writev readv size > 128k ...passed 00:13:46.314 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:46.314 Test: blockdev comparev and writev ...[2024-10-15 08:24:47.845049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.314 [2024-10-15 08:24:47.845089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.845109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.314 [2024-10-15 08:24:47.845132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:46.314 passed 00:13:46.314 Test: blockdev nvme passthru rw ...[2024-10-15 08:24:47.845748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.314 [2024-10-15 08:24:47.845771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.845788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.314 [2024-10-15 08:24:47.845799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.846172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.314 [2024-10-15 08:24:47.846204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.846222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.314 [2024-10-15 08:24:47.846233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.846610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.314 [2024-10-15 08:24:47.846632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.846649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.314 [2024-10-15 08:24:47.846660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:46.314 passed 00:13:46.314 Test: blockdev nvme passthru vendor specific ...passed 00:13:46.314 Test: blockdev nvme admin passthru ...[2024-10-15 08:24:47.847470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.314 [2024-10-15 08:24:47.847494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.847601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.314 [2024-10-15 08:24:47.847617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.847728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.314 [2024-10-15 08:24:47.847749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:46.314 [2024-10-15 08:24:47.847869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.314 [2024-10-15 08:24:47.847890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:46.314 passed 00:13:46.314 Test: blockdev copy ...passed 00:13:46.314 00:13:46.314 Run Summary: Type Total Ran Passed Failed Inactive 00:13:46.314 suites 1 1 n/a 0 0 00:13:46.314 tests 23 23 23 0 0 00:13:46.314 asserts 152 152 152 0 n/a 00:13:46.314 00:13:46.314 Elapsed time = 0.327 seconds 00:13:46.572 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.572 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.572 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:46.572 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.572 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:46.572 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:46.572 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:46.572 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:46.830 rmmod nvme_tcp 00:13:46.830 rmmod nvme_fabrics 00:13:46.830 rmmod nvme_keyring 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 71507 ']' 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 71507 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71507 ']' 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71507 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71507 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:46.830 killing process with pid 71507 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71507' 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71507 00:13:46.830 08:24:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71507 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:47.765 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:47.765 08:24:49 
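[annotation] The firewall cleanup above relies on the tagging done during setup: every rule was inserted through the ipts wrapper with an '-m comment --comment SPDK_NVMF:...' marker, so teardown does not have to remember individual rules; it rewrites the whole ruleset minus the tagged entries. The pattern, reduced to its two halves as they appear in the trace:

  # setup: tag every rule the test adds
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # teardown: dump the ruleset, drop every tagged line, load the result back
  iptables-save | grep -v SPDK_NVMF | iptables-restore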
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:48.024 00:13:48.024 real 0m4.427s 00:13:48.024 user 0m11.120s 00:13:48.024 sys 0m1.911s 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:48.024 ************************************ 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:48.024 END TEST nvmf_bdevio_no_huge 00:13:48.024 ************************************ 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:48.024 ************************************ 00:13:48.024 START TEST nvmf_tls 00:13:48.024 ************************************ 00:13:48.024 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:48.284 * Looking for test storage... 
00:13:48.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:48.284 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:48.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.285 --rc genhtml_branch_coverage=1 00:13:48.285 --rc genhtml_function_coverage=1 00:13:48.285 --rc genhtml_legend=1 00:13:48.285 --rc geninfo_all_blocks=1 00:13:48.285 --rc geninfo_unexecuted_blocks=1 00:13:48.285 00:13:48.285 ' 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:48.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.285 --rc genhtml_branch_coverage=1 00:13:48.285 --rc genhtml_function_coverage=1 00:13:48.285 --rc genhtml_legend=1 00:13:48.285 --rc geninfo_all_blocks=1 00:13:48.285 --rc geninfo_unexecuted_blocks=1 00:13:48.285 00:13:48.285 ' 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:48.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.285 --rc genhtml_branch_coverage=1 00:13:48.285 --rc genhtml_function_coverage=1 00:13:48.285 --rc genhtml_legend=1 00:13:48.285 --rc geninfo_all_blocks=1 00:13:48.285 --rc geninfo_unexecuted_blocks=1 00:13:48.285 00:13:48.285 ' 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:48.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.285 --rc genhtml_branch_coverage=1 00:13:48.285 --rc genhtml_function_coverage=1 00:13:48.285 --rc genhtml_legend=1 00:13:48.285 --rc geninfo_all_blocks=1 00:13:48.285 --rc geninfo_unexecuted_blocks=1 00:13:48.285 00:13:48.285 ' 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.285 08:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:48.285 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:48.285 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:48.286 
08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@458 -- # nvmf_veth_init 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:48.286 Cannot find device "nvmf_init_br" 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:48.286 08:24:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:48.286 Cannot find device "nvmf_init_br2" 00:13:48.286 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:48.286 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:48.545 Cannot find device "nvmf_tgt_br" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.545 Cannot find device "nvmf_tgt_br2" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:48.545 Cannot find device "nvmf_init_br" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:48.545 Cannot find device "nvmf_init_br2" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:48.545 Cannot find device "nvmf_tgt_br" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:48.545 Cannot find device "nvmf_tgt_br2" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:48.545 Cannot find device "nvmf_br" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:48.545 Cannot find device "nvmf_init_if" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:48.545 Cannot find device "nvmf_init_if2" 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:48.545 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:48.805 08:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:48.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:48.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:13:48.805 00:13:48.805 --- 10.0.0.3 ping statistics --- 00:13:48.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.805 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:48.805 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:48.805 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:13:48.805 00:13:48.805 --- 10.0.0.4 ping statistics --- 00:13:48.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.805 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:48.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:48.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:13:48.805 00:13:48.805 --- 10.0.0.1 ping statistics --- 00:13:48.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.805 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:48.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:48.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:13:48.805 00:13:48.805 --- 10.0.0.2 ping statistics --- 00:13:48.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.805 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # return 0 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=71796 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 71796 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 71796 ']' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.805 08:24:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:48.805 [2024-10-15 08:24:50.468509] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
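Stripped of the xtrace prefixes and timestamps, the fixture that nvmf_veth_init assembles above (interface names and addresses exactly as in the trace) is one veth-backed namespace bridged to the host:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge        # the four host-side peers all join one bridge
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                     # host -> namespace, verified above

(Every veth end, the bridge, and lo inside the namespace are also brought up, as in the trace.) The target application then runs inside nvmf_tgt_ns_spdk and, per the later nvmf_subsystem_add_listener call, listens on 10.0.0.3:4420, while the initiators connect from the host side of the bridge.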
00:13:48.805 [2024-10-15 08:24:50.468619] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.064 [2024-10-15 08:24:50.613768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.064 [2024-10-15 08:24:50.700186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.064 [2024-10-15 08:24:50.700244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.064 [2024-10-15 08:24:50.700259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.064 [2024-10-15 08:24:50.700269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.064 [2024-10-15 08:24:50.700279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.064 [2024-10-15 08:24:50.700825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.035 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.035 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:50.035 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:50.035 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:50.035 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.035 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.035 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:50.035 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:50.294 true 00:13:50.294 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:50.294 08:24:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:50.552 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:50.552 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:50.552 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:50.810 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:50.810 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:51.068 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:51.068 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:51.068 08:24:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:51.635 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:51.635 08:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:51.894 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:51.894 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:51.894 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:51.894 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:52.153 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:52.153 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:52.153 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:52.411 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:52.411 08:24:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:52.670 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:52.670 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:52.670 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:52.975 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:52.975 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:13:53.234 08:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.2C2HCzdWcZ 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.1EuBd6YXwq 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2C2HCzdWcZ 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.1EuBd6YXwq 00:13:53.234 08:24:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:53.493 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:53.750 [2024-10-15 08:24:55.424513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.008 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.2C2HCzdWcZ 00:13:54.008 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2C2HCzdWcZ 00:13:54.008 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:54.008 [2024-10-15 08:24:55.737411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.267 08:24:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:54.526 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:54.785 [2024-10-15 08:24:56.349561] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:54.785 [2024-10-15 08:24:56.349875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:54.785 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:55.044 malloc0 00:13:55.044 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:55.303 08:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.2C2HCzdWcZ 00:13:55.562 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:55.822 08:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2C2HCzdWcZ 00:14:08.027 Initializing NVMe Controllers 00:14:08.027 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:08.027 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:08.027 Initialization complete. Launching workers. 00:14:08.027 ======================================================== 00:14:08.027 Latency(us) 00:14:08.027 Device Information : IOPS MiB/s Average min max 00:14:08.027 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9751.47 38.09 6564.68 1625.03 13254.50 00:14:08.027 ======================================================== 00:14:08.027 Total : 9751.47 38.09 6564.68 1625.03 13254.50 00:14:08.027 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2C2HCzdWcZ 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2C2HCzdWcZ 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72040 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72040 /var/tmp/bdevperf.sock 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72040 ']' 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
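Condensed from the setup_nvmf_tgt trace above, the TLS-enabled target bring-up and the first, successful initiator run use only commands already shown; rpc_py is the scripts/rpc.py alias defined at target/tls.sh@12, and the PSK file holds the NVMeTLSkey-1:01:... interchange string generated earlier:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py sock_impl_set_options -i ssl --tls-version 13          # TLS 1.3 for the ssl sock impl
  $rpc_py framework_start_init
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py keyring_file_add_key key0 /tmp/tmp.2C2HCzdWcZ
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl \
      -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path /tmp/tmp.2C2HCzdWcZ

The bdevperf variant that follows wires the initiator the same way: the key file is loaded into bdevperf's keyring over /var/tmp/bdevperf.sock, bdev_nvme_attach_controller is called with --psk key0, and I/O is driven by bdevperf.py perform_tests.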
00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.027 08:25:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.027 [2024-10-15 08:25:07.757679] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:14:08.027 [2024-10-15 08:25:07.757812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72040 ] 00:14:08.027 [2024-10-15 08:25:07.899066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.027 [2024-10-15 08:25:07.976724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.027 [2024-10-15 08:25:08.050003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.027 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.027 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:08.027 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2C2HCzdWcZ 00:14:08.027 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:08.027 [2024-10-15 08:25:08.661789] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.027 TLSTESTn1 00:14:08.027 08:25:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:08.027 Running I/O for 10 seconds... 
00:14:09.221 3948.00 IOPS, 15.42 MiB/s [2024-10-15T08:25:11.886Z] 3988.00 IOPS, 15.58 MiB/s [2024-10-15T08:25:13.262Z] 3966.67 IOPS, 15.49 MiB/s [2024-10-15T08:25:14.198Z] 3969.25 IOPS, 15.50 MiB/s [2024-10-15T08:25:15.133Z] 3969.20 IOPS, 15.50 MiB/s [2024-10-15T08:25:16.068Z] 3974.67 IOPS, 15.53 MiB/s [2024-10-15T08:25:17.002Z] 3980.43 IOPS, 15.55 MiB/s [2024-10-15T08:25:17.936Z] 3985.12 IOPS, 15.57 MiB/s [2024-10-15T08:25:18.869Z] 3992.11 IOPS, 15.59 MiB/s [2024-10-15T08:25:19.127Z] 3994.40 IOPS, 15.60 MiB/s 00:14:17.396 Latency(us) 00:14:17.396 [2024-10-15T08:25:19.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.396 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:17.396 Verification LBA range: start 0x0 length 0x2000 00:14:17.396 TLSTESTn1 : 10.02 3999.66 15.62 0.00 0.00 31941.99 6821.70 29312.47 00:14:17.396 [2024-10-15T08:25:19.127Z] =================================================================================================================== 00:14:17.396 [2024-10-15T08:25:19.127Z] Total : 3999.66 15.62 0.00 0.00 31941.99 6821.70 29312.47 00:14:17.396 { 00:14:17.396 "results": [ 00:14:17.396 { 00:14:17.396 "job": "TLSTESTn1", 00:14:17.396 "core_mask": "0x4", 00:14:17.396 "workload": "verify", 00:14:17.396 "status": "finished", 00:14:17.396 "verify_range": { 00:14:17.396 "start": 0, 00:14:17.396 "length": 8192 00:14:17.396 }, 00:14:17.396 "queue_depth": 128, 00:14:17.396 "io_size": 4096, 00:14:17.396 "runtime": 10.018604, 00:14:17.396 "iops": 3999.659034332528, 00:14:17.396 "mibps": 15.623668102861437, 00:14:17.396 "io_failed": 0, 00:14:17.396 "io_timeout": 0, 00:14:17.396 "avg_latency_us": 31941.989641840282, 00:14:17.396 "min_latency_us": 6821.701818181818, 00:14:17.396 "max_latency_us": 29312.465454545454 00:14:17.396 } 00:14:17.396 ], 00:14:17.396 "core_count": 1 00:14:17.396 } 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72040 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72040 ']' 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72040 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72040 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:17.396 killing process with pid 72040 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72040' 00:14:17.396 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.396 00:14:17.396 Latency(us) 00:14:17.396 [2024-10-15T08:25:19.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.396 [2024-10-15T08:25:19.127Z] =================================================================================================================== 00:14:17.396 [2024-10-15T08:25:19.127Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72040 00:14:17.396 08:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72040 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1EuBd6YXwq 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1EuBd6YXwq 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1EuBd6YXwq 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1EuBd6YXwq 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72174 00:14:17.654 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:17.655 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.655 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72174 /var/tmp/bdevperf.sock 00:14:17.655 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72174 ']' 00:14:17.655 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.655 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.655 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.655 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.655 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.655 [2024-10-15 08:25:19.255255] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:14:17.655 [2024-10-15 08:25:19.255349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72174 ] 00:14:17.941 [2024-10-15 08:25:19.397657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.941 [2024-10-15 08:25:19.481834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.941 [2024-10-15 08:25:19.556605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.941 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.941 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:17.941 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1EuBd6YXwq 00:14:18.200 08:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:18.459 [2024-10-15 08:25:20.180482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.459 [2024-10-15 08:25:20.188151] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:18.459 [2024-10-15 08:25:20.188500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7b090 (107): Transport endpoint is not connected 00:14:18.459 [2024-10-15 08:25:20.189489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7b090 (9): Bad file descriptor 00:14:18.718 [2024-10-15 08:25:20.190485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:18.718 [2024-10-15 08:25:20.190508] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:18.718 [2024-10-15 08:25:20.190520] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:18.718 [2024-10-15 08:25:20.190532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:18.718 request: 00:14:18.718 { 00:14:18.718 "name": "TLSTEST", 00:14:18.718 "trtype": "tcp", 00:14:18.718 "traddr": "10.0.0.3", 00:14:18.718 "adrfam": "ipv4", 00:14:18.718 "trsvcid": "4420", 00:14:18.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.718 "prchk_reftag": false, 00:14:18.718 "prchk_guard": false, 00:14:18.718 "hdgst": false, 00:14:18.718 "ddgst": false, 00:14:18.718 "psk": "key0", 00:14:18.718 "allow_unrecognized_csi": false, 00:14:18.718 "method": "bdev_nvme_attach_controller", 00:14:18.718 "req_id": 1 00:14:18.718 } 00:14:18.718 Got JSON-RPC error response 00:14:18.718 response: 00:14:18.718 { 00:14:18.718 "code": -5, 00:14:18.718 "message": "Input/output error" 00:14:18.718 } 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72174 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72174 ']' 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72174 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72174 00:14:18.718 killing process with pid 72174 00:14:18.718 Received shutdown signal, test time was about 10.000000 seconds 00:14:18.718 00:14:18.718 Latency(us) 00:14:18.718 [2024-10-15T08:25:20.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.718 [2024-10-15T08:25:20.449Z] =================================================================================================================== 00:14:18.718 [2024-10-15T08:25:20.449Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72174' 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72174 00:14:18.718 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72174 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2C2HCzdWcZ 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2C2HCzdWcZ 
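The Input/output error above is the expected outcome: target/tls.sh@147 wraps run_bdevperf in NOT, so attaching with the second, never-registered key (/tmp/tmp.1EuBd6YXwq) has to fail for the test case to pass. Reduced to its essentials, with the RPC socket and key name from the trace (the explicit if/exit stands in for the NOT helper's status inversion):

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1EuBd6YXwq
  if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo 'unexpected success with mismatched PSK' >&2; exit 1
  fi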
00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2C2HCzdWcZ 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2C2HCzdWcZ 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72198 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72198 /var/tmp/bdevperf.sock 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72198 ']' 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.977 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.977 [2024-10-15 08:25:20.560527] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:14:18.977 [2024-10-15 08:25:20.560653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72198 ] 00:14:18.977 [2024-10-15 08:25:20.698516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.236 [2024-10-15 08:25:20.775357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.236 [2024-10-15 08:25:20.850480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.236 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.236 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:19.236 08:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2C2HCzdWcZ 00:14:19.494 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:19.752 [2024-10-15 08:25:21.428645] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.752 [2024-10-15 08:25:21.438309] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:19.752 [2024-10-15 08:25:21.438370] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:19.752 [2024-10-15 08:25:21.438421] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:19.752 [2024-10-15 08:25:21.438530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd9090 (107): Transport endpoint is not connected 00:14:19.752 [2024-10-15 08:25:21.439519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd9090 (9): Bad file descriptor 00:14:19.752 [2024-10-15 08:25:21.440516] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:19.752 [2024-10-15 08:25:21.440541] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:19.752 [2024-10-15 08:25:21.440553] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:19.752 [2024-10-15 08:25:21.440565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:19.752 request: 00:14:19.752 { 00:14:19.752 "name": "TLSTEST", 00:14:19.752 "trtype": "tcp", 00:14:19.752 "traddr": "10.0.0.3", 00:14:19.752 "adrfam": "ipv4", 00:14:19.752 "trsvcid": "4420", 00:14:19.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.752 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:19.752 "prchk_reftag": false, 00:14:19.752 "prchk_guard": false, 00:14:19.752 "hdgst": false, 00:14:19.752 "ddgst": false, 00:14:19.752 "psk": "key0", 00:14:19.752 "allow_unrecognized_csi": false, 00:14:19.752 "method": "bdev_nvme_attach_controller", 00:14:19.752 "req_id": 1 00:14:19.752 } 00:14:19.752 Got JSON-RPC error response 00:14:19.752 response: 00:14:19.752 { 00:14:19.752 "code": -5, 00:14:19.752 "message": "Input/output error" 00:14:19.752 } 00:14:19.752 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72198 00:14:19.752 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72198 ']' 00:14:19.752 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72198 00:14:19.753 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:19.753 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.753 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72198 00:14:20.010 killing process with pid 72198 00:14:20.010 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.010 00:14:20.010 Latency(us) 00:14:20.010 [2024-10-15T08:25:21.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.010 [2024-10-15T08:25:21.741Z] =================================================================================================================== 00:14:20.010 [2024-10-15T08:25:21.741Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.010 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:20.010 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:20.010 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72198' 00:14:20.010 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72198 00:14:20.010 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72198 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2C2HCzdWcZ 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2C2HCzdWcZ 
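Both this failure and the next one (cnode2 with host1) stop at PSK lookup rather than at I/O: the target resolves the key by the TLS PSK identity, which embeds the host NQN and the subsystem NQN, and the only pairing ever registered above was host1 against cnode1. Following the pattern of the identity strings printed in the errors:

  # the single registration from the setup phase
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # resolvable identity (by the same pattern):   NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1
  # rejected above (host2 against cnode1):       NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
  # rejected in the next case (host1, cnode2):   NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2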
00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:20.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2C2HCzdWcZ 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2C2HCzdWcZ 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72225 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72225 /var/tmp/bdevperf.sock 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72225 ']' 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.269 08:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.269 [2024-10-15 08:25:21.814265] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:14:20.269 [2024-10-15 08:25:21.814383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72225 ] 00:14:20.269 [2024-10-15 08:25:21.947400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.570 [2024-10-15 08:25:22.026817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.570 [2024-10-15 08:25:22.102085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.186 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.186 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:21.186 08:25:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2C2HCzdWcZ 00:14:21.445 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:21.703 [2024-10-15 08:25:23.390464] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:21.703 [2024-10-15 08:25:23.398627] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:21.703 [2024-10-15 08:25:23.398681] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:21.703 [2024-10-15 08:25:23.398732] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:21.703 [2024-10-15 08:25:23.399420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1597090 (107): Transport endpoint is not connected 00:14:21.703 [2024-10-15 08:25:23.400409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1597090 (9): Bad file descriptor 00:14:21.703 request: 00:14:21.703 { 00:14:21.703 "name": "TLSTEST", 00:14:21.703 "trtype": "tcp", 00:14:21.703 "traddr": "10.0.0.3", 00:14:21.703 "adrfam": "ipv4", 00:14:21.703 "trsvcid": "4420", 00:14:21.703 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:21.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.703 "prchk_reftag": false, 00:14:21.703 "prchk_guard": false, 00:14:21.703 "hdgst": false, 00:14:21.703 "ddgst": false, 00:14:21.703 "psk": "key0", 00:14:21.703 "allow_unrecognized_csi": false, 00:14:21.703 "method": "bdev_nvme_attach_controller", 00:14:21.703 "req_id": 1 00:14:21.703 } 00:14:21.703 Got JSON-RPC error response 00:14:21.703 response: 00:14:21.703 { 00:14:21.703 "code": -5, 00:14:21.703 "message": "Input/output error" 00:14:21.703 } 00:14:21.703 [2024-10-15 08:25:23.401405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:21.703 [2024-10-15 08:25:23.401427] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:21.703 [2024-10-15 08:25:23.401437] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: 
Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:21.703 [2024-10-15 08:25:23.401450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72225 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72225 ']' 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72225 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72225 00:14:21.962 killing process with pid 72225 00:14:21.962 Received shutdown signal, test time was about 10.000000 seconds 00:14:21.962 00:14:21.962 Latency(us) 00:14:21.962 [2024-10-15T08:25:23.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.962 [2024-10-15T08:25:23.693Z] =================================================================================================================== 00:14:21.962 [2024-10-15T08:25:23.693Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72225' 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72225 00:14:21.962 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72225 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.222 08:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72259 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72259 /var/tmp/bdevperf.sock 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72259 ']' 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:22.222 08:25:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.222 [2024-10-15 08:25:23.791567] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:14:22.222 [2024-10-15 08:25:23.791679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72259 ] 00:14:22.222 [2024-10-15 08:25:23.933680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.481 [2024-10-15 08:25:24.015864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.481 [2024-10-15 08:25:24.094048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.481 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.481 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:22.481 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:22.739 [2024-10-15 08:25:24.440622] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:22.739 [2024-10-15 08:25:24.440692] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:22.739 request: 00:14:22.739 { 00:14:22.739 "name": "key0", 00:14:22.739 "path": "", 00:14:22.739 "method": "keyring_file_add_key", 00:14:22.739 "req_id": 1 00:14:22.739 } 00:14:22.739 Got JSON-RPC error response 00:14:22.739 response: 00:14:22.739 { 00:14:22.739 "code": -1, 00:14:22.739 "message": "Operation not permitted" 00:14:22.739 } 00:14:22.739 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:23.307 [2024-10-15 08:25:24.732828] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.307 [2024-10-15 08:25:24.732913] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:23.307 request: 00:14:23.307 { 00:14:23.307 "name": "TLSTEST", 00:14:23.307 "trtype": "tcp", 00:14:23.307 "traddr": "10.0.0.3", 00:14:23.307 "adrfam": "ipv4", 00:14:23.307 "trsvcid": "4420", 00:14:23.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:23.307 "prchk_reftag": false, 00:14:23.307 "prchk_guard": false, 00:14:23.307 "hdgst": false, 00:14:23.307 "ddgst": false, 00:14:23.307 "psk": "key0", 00:14:23.307 "allow_unrecognized_csi": false, 00:14:23.307 "method": "bdev_nvme_attach_controller", 00:14:23.307 "req_id": 1 00:14:23.307 } 00:14:23.307 Got JSON-RPC error response 00:14:23.307 response: 00:14:23.307 { 00:14:23.307 "code": -126, 00:14:23.307 "message": "Required key not available" 00:14:23.307 } 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72259 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72259 ']' 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72259 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.307 08:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72259 00:14:23.307 killing process with pid 72259 00:14:23.307 Received shutdown signal, test time was about 10.000000 seconds 00:14:23.307 00:14:23.307 Latency(us) 00:14:23.307 [2024-10-15T08:25:25.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.307 [2024-10-15T08:25:25.038Z] =================================================================================================================== 00:14:23.307 [2024-10-15T08:25:25.038Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72259' 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72259 00:14:23.307 08:25:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72259 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71796 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 71796 ']' 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 71796 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71796 00:14:23.566 killing process with pid 71796 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71796' 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 71796 00:14:23.566 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 71796 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 
-- # prefix=NVMeTLSkey-1 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.KYjIAIRN2h 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.KYjIAIRN2h 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72290 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72290 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72290 ']' 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.827 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.827 [2024-10-15 08:25:25.526014] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:14:23.827 [2024-10-15 08:25:25.526155] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.093 [2024-10-15 08:25:25.667631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.093 [2024-10-15 08:25:25.743975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.093 [2024-10-15 08:25:25.744040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:24.093 [2024-10-15 08:25:25.744053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.093 [2024-10-15 08:25:25.744062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.093 [2024-10-15 08:25:25.744070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.093 [2024-10-15 08:25:25.744542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.093 [2024-10-15 08:25:25.818138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.KYjIAIRN2h 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KYjIAIRN2h 00:14:24.353 08:25:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:24.613 [2024-10-15 08:25:26.233888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.613 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:24.872 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:25.130 [2024-10-15 08:25:26.822156] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.130 [2024-10-15 08:25:26.822649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:25.130 08:25:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:25.697 malloc0 00:14:25.698 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:25.698 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:14:25.956 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:26.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
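The NVMeTLSkey-1:02:...: string generated at target/tls.sh@160 above and written to /tmp/tmp.KYjIAIRN2h is the long-form TLS PSK interchange format. A minimal sketch of how that value can be reproduced, assuming (as the logged key itself suggests, and as the inline python in nvmf/common.sh@731 hints at) that the payload is the base64 of the configured key text followed by its CRC-32 in little-endian byte order, with ":02:" reflecting the "2" digest argument passed above:

  key="00112233445566778899aabbccddeeff0011223344556677"
  # Sketch only: rebuild the interchange string written to the temp key file above.
  # Assumes payload = base64(key text + CRC-32 of the key text, little-endian).
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("NVMeTLSkey-1:02:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key"

If those assumptions hold, the command prints the same NVMeTLSkey-1:02:MDAx...wWXNJw==: value recorded in the log.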
00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KYjIAIRN2h 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KYjIAIRN2h 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72344 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72344 /var/tmp/bdevperf.sock 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72344 ']' 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.523 08:25:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.523 [2024-10-15 08:25:28.011354] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:14:26.524 [2024-10-15 08:25:28.011704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72344 ] 00:14:26.524 [2024-10-15 08:25:28.150681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.524 [2024-10-15 08:25:28.236237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.782 [2024-10-15 08:25:28.311228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.782 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.782 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:26.782 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:14:27.041 08:25:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:27.299 [2024-10-15 08:25:28.912943] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:27.299 TLSTESTn1 00:14:27.299 08:25:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:27.558 Running I/O for 10 seconds... 00:14:29.495 3728.00 IOPS, 14.56 MiB/s [2024-10-15T08:25:32.160Z] 3846.00 IOPS, 15.02 MiB/s [2024-10-15T08:25:33.535Z] 3874.33 IOPS, 15.13 MiB/s [2024-10-15T08:25:34.471Z] 3877.75 IOPS, 15.15 MiB/s [2024-10-15T08:25:35.405Z] 3889.20 IOPS, 15.19 MiB/s [2024-10-15T08:25:36.357Z] 3898.33 IOPS, 15.23 MiB/s [2024-10-15T08:25:37.292Z] 3905.43 IOPS, 15.26 MiB/s [2024-10-15T08:25:38.225Z] 3913.75 IOPS, 15.29 MiB/s [2024-10-15T08:25:39.159Z] 3919.89 IOPS, 15.31 MiB/s [2024-10-15T08:25:39.159Z] 3922.70 IOPS, 15.32 MiB/s 00:14:37.428 Latency(us) 00:14:37.428 [2024-10-15T08:25:39.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.428 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:37.428 Verification LBA range: start 0x0 length 0x2000 00:14:37.428 TLSTESTn1 : 10.02 3928.35 15.35 0.00 0.00 32525.61 5868.45 24427.05 00:14:37.428 [2024-10-15T08:25:39.160Z] =================================================================================================================== 00:14:37.429 [2024-10-15T08:25:39.160Z] Total : 3928.35 15.35 0.00 0.00 32525.61 5868.45 24427.05 00:14:37.429 { 00:14:37.429 "results": [ 00:14:37.429 { 00:14:37.429 "job": "TLSTESTn1", 00:14:37.429 "core_mask": "0x4", 00:14:37.429 "workload": "verify", 00:14:37.429 "status": "finished", 00:14:37.429 "verify_range": { 00:14:37.429 "start": 0, 00:14:37.429 "length": 8192 00:14:37.429 }, 00:14:37.429 "queue_depth": 128, 00:14:37.429 "io_size": 4096, 00:14:37.429 "runtime": 10.017427, 00:14:37.429 "iops": 3928.3540573841965, 00:14:37.429 "mibps": 15.345133036657018, 00:14:37.429 "io_failed": 0, 00:14:37.429 "io_timeout": 0, 00:14:37.429 "avg_latency_us": 32525.60860781016, 00:14:37.429 "min_latency_us": 5868.450909090909, 00:14:37.429 
"max_latency_us": 24427.054545454546 00:14:37.429 } 00:14:37.429 ], 00:14:37.429 "core_count": 1 00:14:37.429 } 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72344 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72344 ']' 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72344 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72344 00:14:37.688 killing process with pid 72344 00:14:37.688 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.688 00:14:37.688 Latency(us) 00:14:37.688 [2024-10-15T08:25:39.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.688 [2024-10-15T08:25:39.419Z] =================================================================================================================== 00:14:37.688 [2024-10-15T08:25:39.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72344' 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72344 00:14:37.688 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72344 00:14:37.946 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.KYjIAIRN2h 00:14:37.946 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KYjIAIRN2h 00:14:37.946 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:37.946 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KYjIAIRN2h 00:14:37.946 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:37.946 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.946 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:37.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KYjIAIRN2h 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KYjIAIRN2h 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72484 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72484 /var/tmp/bdevperf.sock 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72484 ']' 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.947 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.947 [2024-10-15 08:25:39.524027] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:14:37.947 [2024-10-15 08:25:39.524352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72484 ] 00:14:37.947 [2024-10-15 08:25:39.656184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.206 [2024-10-15 08:25:39.736613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.206 [2024-10-15 08:25:39.812619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:38.206 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.206 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:38.206 08:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:14:38.464 [2024-10-15 08:25:40.169506] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KYjIAIRN2h': 0100666 00:14:38.464 [2024-10-15 08:25:40.169951] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:38.464 request: 00:14:38.464 { 00:14:38.464 "name": "key0", 00:14:38.464 "path": "/tmp/tmp.KYjIAIRN2h", 00:14:38.464 "method": "keyring_file_add_key", 00:14:38.464 "req_id": 1 00:14:38.464 } 00:14:38.464 Got JSON-RPC error response 00:14:38.464 response: 00:14:38.464 { 00:14:38.464 "code": -1, 00:14:38.464 "message": "Operation not permitted" 00:14:38.464 } 00:14:38.723 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:38.982 [2024-10-15 08:25:40.461725] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.982 [2024-10-15 08:25:40.461830] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:38.982 request: 00:14:38.982 { 00:14:38.982 "name": "TLSTEST", 00:14:38.982 "trtype": "tcp", 00:14:38.982 "traddr": "10.0.0.3", 00:14:38.982 "adrfam": "ipv4", 00:14:38.982 "trsvcid": "4420", 00:14:38.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.982 "prchk_reftag": false, 00:14:38.982 "prchk_guard": false, 00:14:38.982 "hdgst": false, 00:14:38.982 "ddgst": false, 00:14:38.982 "psk": "key0", 00:14:38.982 "allow_unrecognized_csi": false, 00:14:38.982 "method": "bdev_nvme_attach_controller", 00:14:38.982 "req_id": 1 00:14:38.982 } 00:14:38.982 Got JSON-RPC error response 00:14:38.982 response: 00:14:38.982 { 00:14:38.982 "code": -126, 00:14:38.982 "message": "Required key not available" 00:14:38.982 } 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72484 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72484 ']' 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72484 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72484 00:14:38.982 killing process with pid 72484 00:14:38.982 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.982 00:14:38.982 Latency(us) 00:14:38.982 [2024-10-15T08:25:40.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.982 [2024-10-15T08:25:40.713Z] =================================================================================================================== 00:14:38.982 [2024-10-15T08:25:40.713Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72484' 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72484 00:14:38.982 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72484 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72290 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72290 ']' 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72290 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72290 00:14:39.241 killing process with pid 72290 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72290' 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72290 00:14:39.241 08:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72290 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:39.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72520 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72520 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72520 ']' 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:39.500 08:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.500 [2024-10-15 08:25:41.198328] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:14:39.500 [2024-10-15 08:25:41.198747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.759 [2024-10-15 08:25:41.343413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.759 [2024-10-15 08:25:41.422609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.759 [2024-10-15 08:25:41.422946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.759 [2024-10-15 08:25:41.422968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.759 [2024-10-15 08:25:41.422977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.759 [2024-10-15 08:25:41.422987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
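The failure at 08:25:40 above is the point of this test case: after chmod 0666 at target/tls.sh@171, the file-based keyring refuses the same key file it accepted at mode 0600 ("Invalid permissions for key file '/tmp/tmp.KYjIAIRN2h': 0100666"), so the TLS attach cannot load key0. The check appears to require owner-only access; the blocks that follow (tls.sh@178 and @182) exercise the same refusal on the restarted target and then restore 0600. A small sketch of the behaviour, using the same key file and RPC as the log:

  chmod 0666 /tmp/tmp.KYjIAIRN2h
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h   # rejected: Invalid permissions ... 0100666
  chmod 0600 /tmp/tmp.KYjIAIRN2h
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h   # accepted, as in the earlier successful run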
00:14:39.759 [2024-10-15 08:25:41.423500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.018 [2024-10-15 08:25:41.497692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.KYjIAIRN2h 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.KYjIAIRN2h 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.KYjIAIRN2h 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KYjIAIRN2h 00:14:40.586 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:40.845 [2024-10-15 08:25:42.554597] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.104 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:41.362 08:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:41.621 [2024-10-15 08:25:43.130799] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.621 [2024-10-15 08:25:43.131303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:41.621 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.879 malloc0 00:14:41.879 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:42.137 08:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:14:42.396 
[2024-10-15 08:25:44.041899] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KYjIAIRN2h': 0100666 00:14:42.396 [2024-10-15 08:25:44.041992] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:42.396 request: 00:14:42.396 { 00:14:42.396 "name": "key0", 00:14:42.396 "path": "/tmp/tmp.KYjIAIRN2h", 00:14:42.396 "method": "keyring_file_add_key", 00:14:42.396 "req_id": 1 00:14:42.396 } 00:14:42.396 Got JSON-RPC error response 00:14:42.396 response: 00:14:42.396 { 00:14:42.396 "code": -1, 00:14:42.396 "message": "Operation not permitted" 00:14:42.396 } 00:14:42.396 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:42.654 [2024-10-15 08:25:44.338020] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:42.654 [2024-10-15 08:25:44.338374] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:42.654 request: 00:14:42.654 { 00:14:42.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.654 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.654 "psk": "key0", 00:14:42.654 "method": "nvmf_subsystem_add_host", 00:14:42.654 "req_id": 1 00:14:42.654 } 00:14:42.654 Got JSON-RPC error response 00:14:42.654 response: 00:14:42.654 { 00:14:42.654 "code": -32603, 00:14:42.654 "message": "Internal error" 00:14:42.654 } 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72520 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72520 ']' 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72520 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.654 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72520 00:14:42.911 killing process with pid 72520 00:14:42.911 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:42.911 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:42.911 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72520' 00:14:42.911 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72520 00:14:42.911 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72520 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.KYjIAIRN2h 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72591 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72591 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72591 ']' 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.168 08:25:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.168 [2024-10-15 08:25:44.747051] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:14:43.168 [2024-10-15 08:25:44.747491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.168 [2024-10-15 08:25:44.882358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.425 [2024-10-15 08:25:44.962231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.425 [2024-10-15 08:25:44.962301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.425 [2024-10-15 08:25:44.962313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.425 [2024-10-15 08:25:44.962322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.425 [2024-10-15 08:25:44.962330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.425 [2024-10-15 08:25:44.962790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.425 [2024-10-15 08:25:45.037509] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.KYjIAIRN2h 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KYjIAIRN2h 00:14:44.356 08:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:44.613 [2024-10-15 08:25:46.193175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.613 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:44.870 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:45.128 [2024-10-15 08:25:46.761335] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:45.128 [2024-10-15 08:25:46.761635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.128 08:25:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:45.387 malloc0 00:14:45.387 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:45.646 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:14:45.904 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72647 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72647 /var/tmp/bdevperf.sock 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72647 ']' 
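The large JSON block further below is the running target's configuration, captured with rpc.py save_config at target/tls.sh@198; it records the keyring, sock, bdev and nvmf settings built up by the preceding RPCs, including the TLS key path /tmp/tmp.KYjIAIRN2h and the uring socket implementation. As a usage note, and assuming the usual save/load pairing of SPDK's rpc.py (the file name here is only illustrative), a dump captured this way can be replayed into a fresh target:

  scripts/rpc.py save_config > tgt_config.json
  scripts/rpc.py load_config < tgt_config.json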
00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.469 08:25:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.469 [2024-10-15 08:25:47.986316] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:14:46.469 [2024-10-15 08:25:47.986446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72647 ] 00:14:46.469 [2024-10-15 08:25:48.122499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.728 [2024-10-15 08:25:48.214461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.728 [2024-10-15 08:25:48.293013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:47.318 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:47.318 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:47.318 08:25:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:14:47.576 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:47.836 [2024-10-15 08:25:49.483910] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.836 TLSTESTn1 00:14:48.094 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:48.353 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:48.353 "subsystems": [ 00:14:48.353 { 00:14:48.353 "subsystem": "keyring", 00:14:48.353 "config": [ 00:14:48.353 { 00:14:48.353 "method": "keyring_file_add_key", 00:14:48.353 "params": { 00:14:48.353 "name": "key0", 00:14:48.353 "path": "/tmp/tmp.KYjIAIRN2h" 00:14:48.353 } 00:14:48.353 } 00:14:48.353 ] 00:14:48.353 }, 00:14:48.353 { 00:14:48.353 "subsystem": "iobuf", 00:14:48.353 "config": [ 00:14:48.353 { 00:14:48.353 "method": "iobuf_set_options", 00:14:48.353 "params": { 00:14:48.353 "small_pool_count": 8192, 00:14:48.353 "large_pool_count": 1024, 00:14:48.353 "small_bufsize": 8192, 00:14:48.353 "large_bufsize": 135168 00:14:48.353 } 00:14:48.353 } 00:14:48.353 ] 00:14:48.353 }, 00:14:48.353 { 00:14:48.353 "subsystem": "sock", 00:14:48.353 "config": [ 00:14:48.353 { 00:14:48.353 "method": "sock_set_default_impl", 00:14:48.353 "params": { 00:14:48.353 "impl_name": "uring" 00:14:48.353 
} 00:14:48.353 }, 00:14:48.353 { 00:14:48.353 "method": "sock_impl_set_options", 00:14:48.353 "params": { 00:14:48.353 "impl_name": "ssl", 00:14:48.353 "recv_buf_size": 4096, 00:14:48.353 "send_buf_size": 4096, 00:14:48.353 "enable_recv_pipe": true, 00:14:48.353 "enable_quickack": false, 00:14:48.353 "enable_placement_id": 0, 00:14:48.353 "enable_zerocopy_send_server": true, 00:14:48.353 "enable_zerocopy_send_client": false, 00:14:48.353 "zerocopy_threshold": 0, 00:14:48.353 "tls_version": 0, 00:14:48.353 "enable_ktls": false 00:14:48.353 } 00:14:48.353 }, 00:14:48.353 { 00:14:48.353 "method": "sock_impl_set_options", 00:14:48.353 "params": { 00:14:48.353 "impl_name": "posix", 00:14:48.353 "recv_buf_size": 2097152, 00:14:48.353 "send_buf_size": 2097152, 00:14:48.353 "enable_recv_pipe": true, 00:14:48.353 "enable_quickack": false, 00:14:48.353 "enable_placement_id": 0, 00:14:48.353 "enable_zerocopy_send_server": true, 00:14:48.353 "enable_zerocopy_send_client": false, 00:14:48.353 "zerocopy_threshold": 0, 00:14:48.353 "tls_version": 0, 00:14:48.353 "enable_ktls": false 00:14:48.353 } 00:14:48.353 }, 00:14:48.353 { 00:14:48.353 "method": "sock_impl_set_options", 00:14:48.353 "params": { 00:14:48.353 "impl_name": "uring", 00:14:48.353 "recv_buf_size": 2097152, 00:14:48.353 "send_buf_size": 2097152, 00:14:48.353 "enable_recv_pipe": true, 00:14:48.353 "enable_quickack": false, 00:14:48.354 "enable_placement_id": 0, 00:14:48.354 "enable_zerocopy_send_server": false, 00:14:48.354 "enable_zerocopy_send_client": false, 00:14:48.354 "zerocopy_threshold": 0, 00:14:48.354 "tls_version": 0, 00:14:48.354 "enable_ktls": false 00:14:48.354 } 00:14:48.354 } 00:14:48.354 ] 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "subsystem": "vmd", 00:14:48.354 "config": [] 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "subsystem": "accel", 00:14:48.354 "config": [ 00:14:48.354 { 00:14:48.354 "method": "accel_set_options", 00:14:48.354 "params": { 00:14:48.354 "small_cache_size": 128, 00:14:48.354 "large_cache_size": 16, 00:14:48.354 "task_count": 2048, 00:14:48.354 "sequence_count": 2048, 00:14:48.354 "buf_count": 2048 00:14:48.354 } 00:14:48.354 } 00:14:48.354 ] 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "subsystem": "bdev", 00:14:48.354 "config": [ 00:14:48.354 { 00:14:48.354 "method": "bdev_set_options", 00:14:48.354 "params": { 00:14:48.354 "bdev_io_pool_size": 65535, 00:14:48.354 "bdev_io_cache_size": 256, 00:14:48.354 "bdev_auto_examine": true, 00:14:48.354 "iobuf_small_cache_size": 128, 00:14:48.354 "iobuf_large_cache_size": 16 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "bdev_raid_set_options", 00:14:48.354 "params": { 00:14:48.354 "process_window_size_kb": 1024, 00:14:48.354 "process_max_bandwidth_mb_sec": 0 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "bdev_iscsi_set_options", 00:14:48.354 "params": { 00:14:48.354 "timeout_sec": 30 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "bdev_nvme_set_options", 00:14:48.354 "params": { 00:14:48.354 "action_on_timeout": "none", 00:14:48.354 "timeout_us": 0, 00:14:48.354 "timeout_admin_us": 0, 00:14:48.354 "keep_alive_timeout_ms": 10000, 00:14:48.354 "arbitration_burst": 0, 00:14:48.354 "low_priority_weight": 0, 00:14:48.354 "medium_priority_weight": 0, 00:14:48.354 "high_priority_weight": 0, 00:14:48.354 "nvme_adminq_poll_period_us": 10000, 00:14:48.354 "nvme_ioq_poll_period_us": 0, 00:14:48.354 "io_queue_requests": 0, 00:14:48.354 "delay_cmd_submit": true, 00:14:48.354 "transport_retry_count": 4, 
00:14:48.354 "bdev_retry_count": 3, 00:14:48.354 "transport_ack_timeout": 0, 00:14:48.354 "ctrlr_loss_timeout_sec": 0, 00:14:48.354 "reconnect_delay_sec": 0, 00:14:48.354 "fast_io_fail_timeout_sec": 0, 00:14:48.354 "disable_auto_failback": false, 00:14:48.354 "generate_uuids": false, 00:14:48.354 "transport_tos": 0, 00:14:48.354 "nvme_error_stat": false, 00:14:48.354 "rdma_srq_size": 0, 00:14:48.354 "io_path_stat": false, 00:14:48.354 "allow_accel_sequence": false, 00:14:48.354 "rdma_max_cq_size": 0, 00:14:48.354 "rdma_cm_event_timeout_ms": 0, 00:14:48.354 "dhchap_digests": [ 00:14:48.354 "sha256", 00:14:48.354 "sha384", 00:14:48.354 "sha512" 00:14:48.354 ], 00:14:48.354 "dhchap_dhgroups": [ 00:14:48.354 "null", 00:14:48.354 "ffdhe2048", 00:14:48.354 "ffdhe3072", 00:14:48.354 "ffdhe4096", 00:14:48.354 "ffdhe6144", 00:14:48.354 "ffdhe8192" 00:14:48.354 ] 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "bdev_nvme_set_hotplug", 00:14:48.354 "params": { 00:14:48.354 "period_us": 100000, 00:14:48.354 "enable": false 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "bdev_malloc_create", 00:14:48.354 "params": { 00:14:48.354 "name": "malloc0", 00:14:48.354 "num_blocks": 8192, 00:14:48.354 "block_size": 4096, 00:14:48.354 "physical_block_size": 4096, 00:14:48.354 "uuid": "477c6d70-a25a-4232-85e1-380f6f8a110a", 00:14:48.354 "optimal_io_boundary": 0, 00:14:48.354 "md_size": 0, 00:14:48.354 "dif_type": 0, 00:14:48.354 "dif_is_head_of_md": false, 00:14:48.354 "dif_pi_format": 0 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "bdev_wait_for_examine" 00:14:48.354 } 00:14:48.354 ] 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "subsystem": "nbd", 00:14:48.354 "config": [] 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "subsystem": "scheduler", 00:14:48.354 "config": [ 00:14:48.354 { 00:14:48.354 "method": "framework_set_scheduler", 00:14:48.354 "params": { 00:14:48.354 "name": "static" 00:14:48.354 } 00:14:48.354 } 00:14:48.354 ] 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "subsystem": "nvmf", 00:14:48.354 "config": [ 00:14:48.354 { 00:14:48.354 "method": "nvmf_set_config", 00:14:48.354 "params": { 00:14:48.354 "discovery_filter": "match_any", 00:14:48.354 "admin_cmd_passthru": { 00:14:48.354 "identify_ctrlr": false 00:14:48.354 }, 00:14:48.354 "dhchap_digests": [ 00:14:48.354 "sha256", 00:14:48.354 "sha384", 00:14:48.354 "sha512" 00:14:48.354 ], 00:14:48.354 "dhchap_dhgroups": [ 00:14:48.354 "null", 00:14:48.354 "ffdhe2048", 00:14:48.354 "ffdhe3072", 00:14:48.354 "ffdhe4096", 00:14:48.354 "ffdhe6144", 00:14:48.354 "ffdhe8192" 00:14:48.354 ] 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "nvmf_set_max_subsystems", 00:14:48.354 "params": { 00:14:48.354 "max_subsystems": 1024 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "nvmf_set_crdt", 00:14:48.354 "params": { 00:14:48.354 "crdt1": 0, 00:14:48.354 "crdt2": 0, 00:14:48.354 "crdt3": 0 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "nvmf_create_transport", 00:14:48.354 "params": { 00:14:48.354 "trtype": "TCP", 00:14:48.354 "max_queue_depth": 128, 00:14:48.354 "max_io_qpairs_per_ctrlr": 127, 00:14:48.354 "in_capsule_data_size": 4096, 00:14:48.354 "max_io_size": 131072, 00:14:48.354 "io_unit_size": 131072, 00:14:48.354 "max_aq_depth": 128, 00:14:48.354 "num_shared_buffers": 511, 00:14:48.354 "buf_cache_size": 4294967295, 00:14:48.354 "dif_insert_or_strip": false, 00:14:48.354 "zcopy": false, 00:14:48.354 "c2h_success": false, 00:14:48.354 
"sock_priority": 0, 00:14:48.354 "abort_timeout_sec": 1, 00:14:48.354 "ack_timeout": 0, 00:14:48.354 "data_wr_pool_size": 0 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "nvmf_create_subsystem", 00:14:48.354 "params": { 00:14:48.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.354 "allow_any_host": false, 00:14:48.354 "serial_number": "SPDK00000000000001", 00:14:48.354 "model_number": "SPDK bdev Controller", 00:14:48.354 "max_namespaces": 10, 00:14:48.354 "min_cntlid": 1, 00:14:48.354 "max_cntlid": 65519, 00:14:48.354 "ana_reporting": false 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "nvmf_subsystem_add_host", 00:14:48.354 "params": { 00:14:48.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.354 "host": "nqn.2016-06.io.spdk:host1", 00:14:48.354 "psk": "key0" 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "nvmf_subsystem_add_ns", 00:14:48.354 "params": { 00:14:48.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.354 "namespace": { 00:14:48.354 "nsid": 1, 00:14:48.354 "bdev_name": "malloc0", 00:14:48.354 "nguid": "477C6D70A25A423285E1380F6F8A110A", 00:14:48.354 "uuid": "477c6d70-a25a-4232-85e1-380f6f8a110a", 00:14:48.354 "no_auto_visible": false 00:14:48.354 } 00:14:48.354 } 00:14:48.354 }, 00:14:48.354 { 00:14:48.354 "method": "nvmf_subsystem_add_listener", 00:14:48.354 "params": { 00:14:48.354 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.354 "listen_address": { 00:14:48.354 "trtype": "TCP", 00:14:48.354 "adrfam": "IPv4", 00:14:48.354 "traddr": "10.0.0.3", 00:14:48.354 "trsvcid": "4420" 00:14:48.354 }, 00:14:48.354 "secure_channel": true 00:14:48.354 } 00:14:48.354 } 00:14:48.354 ] 00:14:48.354 } 00:14:48.354 ] 00:14:48.354 }' 00:14:48.354 08:25:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:48.922 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:48.922 "subsystems": [ 00:14:48.922 { 00:14:48.922 "subsystem": "keyring", 00:14:48.922 "config": [ 00:14:48.922 { 00:14:48.922 "method": "keyring_file_add_key", 00:14:48.922 "params": { 00:14:48.922 "name": "key0", 00:14:48.922 "path": "/tmp/tmp.KYjIAIRN2h" 00:14:48.922 } 00:14:48.922 } 00:14:48.922 ] 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "subsystem": "iobuf", 00:14:48.922 "config": [ 00:14:48.922 { 00:14:48.922 "method": "iobuf_set_options", 00:14:48.922 "params": { 00:14:48.922 "small_pool_count": 8192, 00:14:48.922 "large_pool_count": 1024, 00:14:48.922 "small_bufsize": 8192, 00:14:48.922 "large_bufsize": 135168 00:14:48.922 } 00:14:48.922 } 00:14:48.922 ] 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "subsystem": "sock", 00:14:48.922 "config": [ 00:14:48.922 { 00:14:48.922 "method": "sock_set_default_impl", 00:14:48.922 "params": { 00:14:48.922 "impl_name": "uring" 00:14:48.922 } 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "method": "sock_impl_set_options", 00:14:48.922 "params": { 00:14:48.922 "impl_name": "ssl", 00:14:48.922 "recv_buf_size": 4096, 00:14:48.922 "send_buf_size": 4096, 00:14:48.922 "enable_recv_pipe": true, 00:14:48.922 "enable_quickack": false, 00:14:48.922 "enable_placement_id": 0, 00:14:48.922 "enable_zerocopy_send_server": true, 00:14:48.922 "enable_zerocopy_send_client": false, 00:14:48.922 "zerocopy_threshold": 0, 00:14:48.922 "tls_version": 0, 00:14:48.922 "enable_ktls": false 00:14:48.922 } 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "method": "sock_impl_set_options", 00:14:48.922 "params": { 
00:14:48.922 "impl_name": "posix", 00:14:48.922 "recv_buf_size": 2097152, 00:14:48.922 "send_buf_size": 2097152, 00:14:48.922 "enable_recv_pipe": true, 00:14:48.922 "enable_quickack": false, 00:14:48.922 "enable_placement_id": 0, 00:14:48.922 "enable_zerocopy_send_server": true, 00:14:48.922 "enable_zerocopy_send_client": false, 00:14:48.922 "zerocopy_threshold": 0, 00:14:48.922 "tls_version": 0, 00:14:48.922 "enable_ktls": false 00:14:48.922 } 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "method": "sock_impl_set_options", 00:14:48.922 "params": { 00:14:48.922 "impl_name": "uring", 00:14:48.922 "recv_buf_size": 2097152, 00:14:48.922 "send_buf_size": 2097152, 00:14:48.922 "enable_recv_pipe": true, 00:14:48.922 "enable_quickack": false, 00:14:48.922 "enable_placement_id": 0, 00:14:48.922 "enable_zerocopy_send_server": false, 00:14:48.922 "enable_zerocopy_send_client": false, 00:14:48.922 "zerocopy_threshold": 0, 00:14:48.922 "tls_version": 0, 00:14:48.922 "enable_ktls": false 00:14:48.922 } 00:14:48.922 } 00:14:48.922 ] 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "subsystem": "vmd", 00:14:48.922 "config": [] 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "subsystem": "accel", 00:14:48.922 "config": [ 00:14:48.922 { 00:14:48.922 "method": "accel_set_options", 00:14:48.922 "params": { 00:14:48.922 "small_cache_size": 128, 00:14:48.922 "large_cache_size": 16, 00:14:48.922 "task_count": 2048, 00:14:48.922 "sequence_count": 2048, 00:14:48.922 "buf_count": 2048 00:14:48.922 } 00:14:48.922 } 00:14:48.922 ] 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "subsystem": "bdev", 00:14:48.922 "config": [ 00:14:48.922 { 00:14:48.922 "method": "bdev_set_options", 00:14:48.922 "params": { 00:14:48.922 "bdev_io_pool_size": 65535, 00:14:48.922 "bdev_io_cache_size": 256, 00:14:48.922 "bdev_auto_examine": true, 00:14:48.922 "iobuf_small_cache_size": 128, 00:14:48.922 "iobuf_large_cache_size": 16 00:14:48.922 } 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "method": "bdev_raid_set_options", 00:14:48.922 "params": { 00:14:48.922 "process_window_size_kb": 1024, 00:14:48.922 "process_max_bandwidth_mb_sec": 0 00:14:48.922 } 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "method": "bdev_iscsi_set_options", 00:14:48.922 "params": { 00:14:48.922 "timeout_sec": 30 00:14:48.922 } 00:14:48.922 }, 00:14:48.922 { 00:14:48.922 "method": "bdev_nvme_set_options", 00:14:48.922 "params": { 00:14:48.922 "action_on_timeout": "none", 00:14:48.922 "timeout_us": 0, 00:14:48.922 "timeout_admin_us": 0, 00:14:48.922 "keep_alive_timeout_ms": 10000, 00:14:48.922 "arbitration_burst": 0, 00:14:48.922 "low_priority_weight": 0, 00:14:48.922 "medium_priority_weight": 0, 00:14:48.922 "high_priority_weight": 0, 00:14:48.922 "nvme_adminq_poll_period_us": 10000, 00:14:48.922 "nvme_ioq_poll_period_us": 0, 00:14:48.922 "io_queue_requests": 512, 00:14:48.922 "delay_cmd_submit": true, 00:14:48.922 "transport_retry_count": 4, 00:14:48.922 "bdev_retry_count": 3, 00:14:48.922 "transport_ack_timeout": 0, 00:14:48.922 "ctrlr_loss_timeout_sec": 0, 00:14:48.922 "reconnect_delay_sec": 0, 00:14:48.922 "fast_io_fail_timeout_sec": 0, 00:14:48.922 "disable_auto_failback": false, 00:14:48.922 "generate_uuids": false, 00:14:48.922 "transport_tos": 0, 00:14:48.922 "nvme_error_stat": false, 00:14:48.922 "rdma_srq_size": 0, 00:14:48.922 "io_path_stat": false, 00:14:48.922 "allow_accel_sequence": false, 00:14:48.922 "rdma_max_cq_size": 0, 00:14:48.922 "rdma_cm_event_timeout_ms": 0, 00:14:48.922 "dhchap_digests": [ 00:14:48.922 "sha256", 00:14:48.922 "sha384", 00:14:48.922 "sha512" 
00:14:48.922 ], 00:14:48.922 "dhchap_dhgroups": [ 00:14:48.922 "null", 00:14:48.922 "ffdhe2048", 00:14:48.922 "ffdhe3072", 00:14:48.922 "ffdhe4096", 00:14:48.922 "ffdhe6144", 00:14:48.922 "ffdhe8192" 00:14:48.923 ] 00:14:48.923 } 00:14:48.923 }, 00:14:48.923 { 00:14:48.923 "method": "bdev_nvme_attach_controller", 00:14:48.923 "params": { 00:14:48.923 "name": "TLSTEST", 00:14:48.923 "trtype": "TCP", 00:14:48.923 "adrfam": "IPv4", 00:14:48.923 "traddr": "10.0.0.3", 00:14:48.923 "trsvcid": "4420", 00:14:48.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:48.923 "prchk_reftag": false, 00:14:48.923 "prchk_guard": false, 00:14:48.923 "ctrlr_loss_timeout_sec": 0, 00:14:48.923 "reconnect_delay_sec": 0, 00:14:48.923 "fast_io_fail_timeout_sec": 0, 00:14:48.923 "psk": "key0", 00:14:48.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:48.923 "hdgst": false, 00:14:48.923 "ddgst": false, 00:14:48.923 "multipath": "multipath" 00:14:48.923 } 00:14:48.923 }, 00:14:48.923 { 00:14:48.923 "method": "bdev_nvme_set_hotplug", 00:14:48.923 "params": { 00:14:48.923 "period_us": 100000, 00:14:48.923 "enable": false 00:14:48.923 } 00:14:48.923 }, 00:14:48.923 { 00:14:48.923 "method": "bdev_wait_for_examine" 00:14:48.923 } 00:14:48.923 ] 00:14:48.923 }, 00:14:48.923 { 00:14:48.923 "subsystem": "nbd", 00:14:48.923 "config": [] 00:14:48.923 } 00:14:48.923 ] 00:14:48.923 }' 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72647 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72647 ']' 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72647 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72647 00:14:48.923 killing process with pid 72647 00:14:48.923 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.923 00:14:48.923 Latency(us) 00:14:48.923 [2024-10-15T08:25:50.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.923 [2024-10-15T08:25:50.654Z] =================================================================================================================== 00:14:48.923 [2024-10-15T08:25:50.654Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72647' 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72647 00:14:48.923 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72647 00:14:49.181 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72591 00:14:49.181 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72591 ']' 00:14:49.181 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72591 00:14:49.181 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:14:49.181 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.182 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72591 00:14:49.182 killing process with pid 72591 00:14:49.182 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:49.182 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:49.182 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72591' 00:14:49.182 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72591 00:14:49.182 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72591 00:14:49.440 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:49.440 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:49.440 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.440 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.440 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:49.440 "subsystems": [ 00:14:49.440 { 00:14:49.440 "subsystem": "keyring", 00:14:49.440 "config": [ 00:14:49.440 { 00:14:49.440 "method": "keyring_file_add_key", 00:14:49.440 "params": { 00:14:49.440 "name": "key0", 00:14:49.440 "path": "/tmp/tmp.KYjIAIRN2h" 00:14:49.440 } 00:14:49.440 } 00:14:49.440 ] 00:14:49.440 }, 00:14:49.440 { 00:14:49.440 "subsystem": "iobuf", 00:14:49.440 "config": [ 00:14:49.440 { 00:14:49.440 "method": "iobuf_set_options", 00:14:49.440 "params": { 00:14:49.440 "small_pool_count": 8192, 00:14:49.440 "large_pool_count": 1024, 00:14:49.440 "small_bufsize": 8192, 00:14:49.440 "large_bufsize": 135168 00:14:49.440 } 00:14:49.440 } 00:14:49.440 ] 00:14:49.440 }, 00:14:49.440 { 00:14:49.440 "subsystem": "sock", 00:14:49.440 "config": [ 00:14:49.440 { 00:14:49.440 "method": "sock_set_default_impl", 00:14:49.440 "params": { 00:14:49.440 "impl_name": "uring" 00:14:49.440 } 00:14:49.440 }, 00:14:49.440 { 00:14:49.440 "method": "sock_impl_set_options", 00:14:49.440 "params": { 00:14:49.440 "impl_name": "ssl", 00:14:49.440 "recv_buf_size": 4096, 00:14:49.440 "send_buf_size": 4096, 00:14:49.440 "enable_recv_pipe": true, 00:14:49.440 "enable_quickack": false, 00:14:49.440 "enable_placement_id": 0, 00:14:49.440 "enable_zerocopy_send_server": true, 00:14:49.440 "enable_zerocopy_send_client": false, 00:14:49.440 "zerocopy_threshold": 0, 00:14:49.440 "tls_version": 0, 00:14:49.440 "enable_ktls": false 00:14:49.440 } 00:14:49.440 }, 00:14:49.440 { 00:14:49.440 "method": "sock_impl_set_options", 00:14:49.440 "params": { 00:14:49.440 "impl_name": "posix", 00:14:49.440 "recv_buf_size": 2097152, 00:14:49.440 "send_buf_size": 2097152, 00:14:49.440 "enable_recv_pipe": true, 00:14:49.440 "enable_quickack": false, 00:14:49.440 "enable_placement_id": 0, 00:14:49.440 "enable_zerocopy_send_server": true, 00:14:49.440 "enable_zerocopy_send_client": false, 00:14:49.440 "zerocopy_threshold": 0, 00:14:49.440 "tls_version": 0, 00:14:49.440 "enable_ktls": false 00:14:49.440 } 00:14:49.440 }, 00:14:49.440 { 00:14:49.441 "method": "sock_impl_set_options", 00:14:49.441 
"params": { 00:14:49.441 "impl_name": "uring", 00:14:49.441 "recv_buf_size": 2097152, 00:14:49.441 "send_buf_size": 2097152, 00:14:49.441 "enable_recv_pipe": true, 00:14:49.441 "enable_quickack": false, 00:14:49.441 "enable_placement_id": 0, 00:14:49.441 "enable_zerocopy_send_server": false, 00:14:49.441 "enable_zerocopy_send_client": false, 00:14:49.441 "zerocopy_threshold": 0, 00:14:49.441 "tls_version": 0, 00:14:49.441 "enable_ktls": false 00:14:49.441 } 00:14:49.441 } 00:14:49.441 ] 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "subsystem": "vmd", 00:14:49.441 "config": [] 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "subsystem": "accel", 00:14:49.441 "config": [ 00:14:49.441 { 00:14:49.441 "method": "accel_set_options", 00:14:49.441 "params": { 00:14:49.441 "small_cache_size": 128, 00:14:49.441 "large_cache_size": 16, 00:14:49.441 "task_count": 2048, 00:14:49.441 "sequence_count": 2048, 00:14:49.441 "buf_count": 2048 00:14:49.441 } 00:14:49.441 } 00:14:49.441 ] 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "subsystem": "bdev", 00:14:49.441 "config": [ 00:14:49.441 { 00:14:49.441 "method": "bdev_set_options", 00:14:49.441 "params": { 00:14:49.441 "bdev_io_pool_size": 65535, 00:14:49.441 "bdev_io_cache_size": 256, 00:14:49.441 "bdev_auto_examine": true, 00:14:49.441 "iobuf_small_cache_size": 128, 00:14:49.441 "iobuf_large_cache_size": 16 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "bdev_raid_set_options", 00:14:49.441 "params": { 00:14:49.441 "process_window_size_kb": 1024, 00:14:49.441 "process_max_bandwidth_mb_sec": 0 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "bdev_iscsi_set_options", 00:14:49.441 "params": { 00:14:49.441 "timeout_sec": 30 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "bdev_nvme_set_options", 00:14:49.441 "params": { 00:14:49.441 "action_on_timeout": "none", 00:14:49.441 "timeout_us": 0, 00:14:49.441 "timeout_admin_us": 0, 00:14:49.441 "keep_alive_timeout_ms": 10000, 00:14:49.441 "arbitration_burst": 0, 00:14:49.441 "low_priority_weight": 0, 00:14:49.441 "medium_priority_weight": 0, 00:14:49.441 "high_priority_weight": 0, 00:14:49.441 "nvme_adminq_poll_period_us": 10000, 00:14:49.441 "nvme_ioq_poll_period_us": 0, 00:14:49.441 "io_queue_requests": 0, 00:14:49.441 "delay_cmd_submit": true, 00:14:49.441 "transport_retry_count": 4, 00:14:49.441 "bdev_retry_count": 3, 00:14:49.441 "transport_ack_timeout": 0, 00:14:49.441 "ctrlr_loss_timeout_sec": 0, 00:14:49.441 "reconnect_delay_sec": 0, 00:14:49.441 "fast_io_fail_timeout_sec": 0, 00:14:49.441 "disable_auto_failback": false, 00:14:49.441 "generate_uuids": false, 00:14:49.441 "transport_tos": 0, 00:14:49.441 "nvme_error_stat": false, 00:14:49.441 "rdma_srq_size": 0, 00:14:49.441 "io_path_stat": false, 00:14:49.441 "allow_accel_sequence": false, 00:14:49.441 "rdma_max_cq_size": 0, 00:14:49.441 "rdma_cm_event_timeout_ms": 0, 00:14:49.441 "dhchap_digests": [ 00:14:49.441 "sha256", 00:14:49.441 "sha384", 00:14:49.441 "sha512" 00:14:49.441 ], 00:14:49.441 "dhchap_dhgroups": [ 00:14:49.441 "null", 00:14:49.441 "ffdhe2048", 00:14:49.441 "ffdhe3072", 00:14:49.441 "ffdhe4096", 00:14:49.441 "ffdhe6144", 00:14:49.441 "ffdhe8192" 00:14:49.441 ] 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "bdev_nvme_set_hotplug", 00:14:49.441 "params": { 00:14:49.441 "period_us": 100000, 00:14:49.441 "enable": false 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "bdev_malloc_create", 00:14:49.441 "params": { 00:14:49.441 "name": 
"malloc0", 00:14:49.441 "num_blocks": 8192, 00:14:49.441 "block_size": 4096, 00:14:49.441 "physical_block_size": 4096, 00:14:49.441 "uuid": "477c6d70-a25a-4232-85e1-380f6f8a110a", 00:14:49.441 "optimal_io_boundary": 0, 00:14:49.441 "md_size": 0, 00:14:49.441 "dif_type": 0, 00:14:49.441 "dif_is_head_of_md": false, 00:14:49.441 "dif_pi_format": 0 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "bdev_wait_for_examine" 00:14:49.441 } 00:14:49.441 ] 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "subsystem": "nbd", 00:14:49.441 "config": [] 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "subsystem": "scheduler", 00:14:49.441 "config": [ 00:14:49.441 { 00:14:49.441 "method": "framework_set_scheduler", 00:14:49.441 "params": { 00:14:49.441 "name": "static" 00:14:49.441 } 00:14:49.441 } 00:14:49.441 ] 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "subsystem": "nvmf", 00:14:49.441 "config": [ 00:14:49.441 { 00:14:49.441 "method": "nvmf_set_config", 00:14:49.441 "params": { 00:14:49.441 "discovery_filter": "match_any", 00:14:49.441 "admin_cmd_passthru": { 00:14:49.441 "identify_ctrlr": false 00:14:49.441 }, 00:14:49.441 "dhchap_digests": [ 00:14:49.441 "sha256", 00:14:49.441 "sha384", 00:14:49.441 "sha512" 00:14:49.441 ], 00:14:49.441 "dhchap_dhgroups": [ 00:14:49.441 "null", 00:14:49.441 "ffdhe2048", 00:14:49.441 "ffdhe3072", 00:14:49.441 "ffdhe4096", 00:14:49.441 "ffdhe6144", 00:14:49.441 "ffdhe8192" 00:14:49.441 ] 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "nvmf_set_max_subsystems", 00:14:49.441 "params": { 00:14:49.441 "max_subsystems": 1024 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "nvmf_set_crdt", 00:14:49.441 "params": { 00:14:49.441 "crdt1": 0, 00:14:49.441 "crdt2": 0, 00:14:49.441 "crdt3": 0 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "nvmf_create_transport", 00:14:49.441 "params": { 00:14:49.441 "trtype": "TCP", 00:14:49.441 "max_queue_depth": 128, 00:14:49.441 "max_io_qpairs_per_ctrlr": 127, 00:14:49.441 "in_capsule_data_size": 4096, 00:14:49.441 "max_io_size": 131072, 00:14:49.441 "io_unit_size": 131072, 00:14:49.441 "max_aq_depth": 128, 00:14:49.441 "num_shared_buffers": 511, 00:14:49.441 "buf_cache_size": 4294967295, 00:14:49.441 "dif_insert_or_strip": false, 00:14:49.441 "zcopy": false, 00:14:49.441 "c2h_success": false, 00:14:49.441 "sock_priority": 0, 00:14:49.441 "abort_timeout_sec": 1, 00:14:49.441 "ack_timeout": 0, 00:14:49.441 "data_wr_pool_size": 0 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "nvmf_create_subsystem", 00:14:49.441 "params": { 00:14:49.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.441 "allow_any_host": false, 00:14:49.441 "serial_number": "SPDK00000000000001", 00:14:49.441 "model_number": "SPDK bdev Controller", 00:14:49.441 "max_namespaces": 10, 00:14:49.441 "min_cntlid": 1, 00:14:49.441 "max_cntlid": 65519, 00:14:49.441 "ana_reporting": false 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "nvmf_subsystem_add_host", 00:14:49.441 "params": { 00:14:49.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.441 "host": "nqn.2016-06.io.spdk:host1", 00:14:49.441 "psk": "key0" 00:14:49.441 } 00:14:49.441 }, 00:14:49.441 { 00:14:49.441 "method": "nvmf_subsystem_add_ns", 00:14:49.441 "params": { 00:14:49.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.441 "namespace": { 00:14:49.441 "nsid": 1, 00:14:49.441 "bdev_name": "malloc0", 00:14:49.441 "nguid": "477C6D70A25A423285E1380F6F8A110A", 00:14:49.442 "uuid": 
"477c6d70-a25a-4232-85e1-380f6f8a110a", 00:14:49.442 "no_auto_visible": false 00:14:49.442 } 00:14:49.442 } 00:14:49.442 }, 00:14:49.442 { 00:14:49.442 "method": "nvmf_subsystem_add_listener", 00:14:49.442 "params": { 00:14:49.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.442 "listen_address": { 00:14:49.442 "trtype": "TCP", 00:14:49.442 "adrfam": "IPv4", 00:14:49.442 "traddr": "10.0.0.3", 00:14:49.442 "trsvcid": "4420" 00:14:49.442 }, 00:14:49.442 "secure_channel": true 00:14:49.442 } 00:14:49.442 } 00:14:49.442 ] 00:14:49.442 } 00:14:49.442 ] 00:14:49.442 }' 00:14:49.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72702 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72702 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72702 ']' 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.442 08:25:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.442 [2024-10-15 08:25:51.039339] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:14:49.442 [2024-10-15 08:25:51.039738] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.699 [2024-10-15 08:25:51.177813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.699 [2024-10-15 08:25:51.245510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.699 [2024-10-15 08:25:51.245896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.699 [2024-10-15 08:25:51.246047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.699 [2024-10-15 08:25:51.246104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.699 [2024-10-15 08:25:51.246242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:49.699 [2024-10-15 08:25:51.246827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.957 [2024-10-15 08:25:51.434273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.957 [2024-10-15 08:25:51.527664] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.957 [2024-10-15 08:25:51.559596] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:49.958 [2024-10-15 08:25:51.559840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:50.524 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.524 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:50.524 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:50.524 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:50.524 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72734 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72734 /var/tmp/bdevperf.sock 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72734 ']' 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
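The bdevperf instance (pid 72734) is started the same way: idle (-z), with its own RPC socket, and with the saved initiator-side configuration delivered over another file descriptor. The flags below are the ones from the command line traced above; only the config variable name is illustrative:

    # Launch bdevperf idle on its own RPC socket: queue depth 128, 4096-byte verify I/O, 10 s run,
    # JSON config supplied through a pipe (the -c /dev/fd/63 in the log).
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
    bdevperf_pid=$!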
00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:50.525 08:25:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:50.525 "subsystems": [ 00:14:50.525 { 00:14:50.525 "subsystem": "keyring", 00:14:50.525 "config": [ 00:14:50.525 { 00:14:50.525 "method": "keyring_file_add_key", 00:14:50.525 "params": { 00:14:50.525 "name": "key0", 00:14:50.525 "path": "/tmp/tmp.KYjIAIRN2h" 00:14:50.525 } 00:14:50.525 } 00:14:50.525 ] 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "subsystem": "iobuf", 00:14:50.525 "config": [ 00:14:50.525 { 00:14:50.525 "method": "iobuf_set_options", 00:14:50.525 "params": { 00:14:50.525 "small_pool_count": 8192, 00:14:50.525 "large_pool_count": 1024, 00:14:50.525 "small_bufsize": 8192, 00:14:50.525 "large_bufsize": 135168 00:14:50.525 } 00:14:50.525 } 00:14:50.525 ] 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "subsystem": "sock", 00:14:50.525 "config": [ 00:14:50.525 { 00:14:50.525 "method": "sock_set_default_impl", 00:14:50.525 "params": { 00:14:50.525 "impl_name": "uring" 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "sock_impl_set_options", 00:14:50.525 "params": { 00:14:50.525 "impl_name": "ssl", 00:14:50.525 "recv_buf_size": 4096, 00:14:50.525 "send_buf_size": 4096, 00:14:50.525 "enable_recv_pipe": true, 00:14:50.525 "enable_quickack": false, 00:14:50.525 "enable_placement_id": 0, 00:14:50.525 "enable_zerocopy_send_server": true, 00:14:50.525 "enable_zerocopy_send_client": false, 00:14:50.525 "zerocopy_threshold": 0, 00:14:50.525 "tls_version": 0, 00:14:50.525 "enable_ktls": false 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "sock_impl_set_options", 00:14:50.525 "params": { 00:14:50.525 "impl_name": "posix", 00:14:50.525 "recv_buf_size": 2097152, 00:14:50.525 "send_buf_size": 2097152, 00:14:50.525 "enable_recv_pipe": true, 00:14:50.525 "enable_quickack": false, 00:14:50.525 "enable_placement_id": 0, 00:14:50.525 "enable_zerocopy_send_server": true, 00:14:50.525 "enable_zerocopy_send_client": false, 00:14:50.525 "zerocopy_threshold": 0, 00:14:50.525 "tls_version": 0, 00:14:50.525 "enable_ktls": false 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "sock_impl_set_options", 00:14:50.525 "params": { 00:14:50.525 "impl_name": "uring", 00:14:50.525 "recv_buf_size": 2097152, 00:14:50.525 "send_buf_size": 2097152, 00:14:50.525 "enable_recv_pipe": true, 00:14:50.525 "enable_quickack": false, 00:14:50.525 "enable_placement_id": 0, 00:14:50.525 "enable_zerocopy_send_server": false, 00:14:50.525 "enable_zerocopy_send_client": false, 00:14:50.525 "zerocopy_threshold": 0, 00:14:50.525 "tls_version": 0, 00:14:50.525 "enable_ktls": false 00:14:50.525 } 00:14:50.525 } 00:14:50.525 ] 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "subsystem": "vmd", 00:14:50.525 "config": [] 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "subsystem": "accel", 00:14:50.525 "config": [ 00:14:50.525 { 00:14:50.525 "method": "accel_set_options", 00:14:50.525 "params": { 00:14:50.525 "small_cache_size": 128, 00:14:50.525 "large_cache_size": 16, 00:14:50.525 "task_count": 2048, 00:14:50.525 "sequence_count": 2048, 00:14:50.525 "buf_count": 2048 
00:14:50.525 } 00:14:50.525 } 00:14:50.525 ] 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "subsystem": "bdev", 00:14:50.525 "config": [ 00:14:50.525 { 00:14:50.525 "method": "bdev_set_options", 00:14:50.525 "params": { 00:14:50.525 "bdev_io_pool_size": 65535, 00:14:50.525 "bdev_io_cache_size": 256, 00:14:50.525 "bdev_auto_examine": true, 00:14:50.525 "iobuf_small_cache_size": 128, 00:14:50.525 "iobuf_large_cache_size": 16 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "bdev_raid_set_options", 00:14:50.525 "params": { 00:14:50.525 "process_window_size_kb": 1024, 00:14:50.525 "process_max_bandwidth_mb_sec": 0 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "bdev_iscsi_set_options", 00:14:50.525 "params": { 00:14:50.525 "timeout_sec": 30 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "bdev_nvme_set_options", 00:14:50.525 "params": { 00:14:50.525 "action_on_timeout": "none", 00:14:50.525 "timeout_us": 0, 00:14:50.525 "timeout_admin_us": 0, 00:14:50.525 "keep_alive_timeout_ms": 10000, 00:14:50.525 "arbitration_burst": 0, 00:14:50.525 "low_priority_weight": 0, 00:14:50.525 "medium_priority_weight": 0, 00:14:50.525 "high_priority_weight": 0, 00:14:50.525 "nvme_adminq_poll_period_us": 10000, 00:14:50.525 "nvme_ioq_poll_period_us": 0, 00:14:50.525 "io_queue_requests": 512, 00:14:50.525 "delay_cmd_submit": true, 00:14:50.525 "transport_retry_count": 4, 00:14:50.525 "bdev_retry_count": 3, 00:14:50.525 "transport_ack_timeout": 0, 00:14:50.525 "ctrlr_loss_timeout_sec": 0, 00:14:50.525 "reconnect_delay_sec": 0, 00:14:50.525 "fast_io_fail_timeout_sec": 0, 00:14:50.525 "disable_auto_failback": false, 00:14:50.525 "generate_uuids": false, 00:14:50.525 "transport_tos": 0, 00:14:50.525 "nvme_error_stat": false, 00:14:50.525 "rdma_srq_size": 0, 00:14:50.525 "io_path_stat": false, 00:14:50.525 "allow_accel_sequence": false, 00:14:50.525 "rdma_max_cq_size": 0, 00:14:50.525 "rdma_cm_event_timeout_ms": 0, 00:14:50.525 "dhchap_digests": [ 00:14:50.525 "sha256", 00:14:50.525 "sha384", 00:14:50.525 "sha512" 00:14:50.525 ], 00:14:50.525 "dhchap_dhgroups": [ 00:14:50.525 "null", 00:14:50.525 "ffdhe2048", 00:14:50.525 "ffdhe3072", 00:14:50.525 "ffdhe4096", 00:14:50.525 "ffdhe6144", 00:14:50.525 "ffdhe8192" 00:14:50.525 ] 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "bdev_nvme_attach_controller", 00:14:50.525 "params": { 00:14:50.525 "name": "TLSTEST", 00:14:50.525 "trtype": "TCP", 00:14:50.525 "adrfam": "IPv4", 00:14:50.525 "traddr": "10.0.0.3", 00:14:50.525 "trsvcid": "4420", 00:14:50.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.525 "prchk_reftag": false, 00:14:50.525 "prchk_guard": false, 00:14:50.525 "ctrlr_loss_timeout_sec": 0, 00:14:50.525 "reconnect_delay_sec": 0, 00:14:50.525 "fast_io_fail_timeout_sec": 0, 00:14:50.525 "psk": "key0", 00:14:50.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:50.525 "hdgst": false, 00:14:50.525 "ddgst": false, 00:14:50.525 "multipath": "multipath" 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "bdev_nvme_set_hotplug", 00:14:50.525 "params": { 00:14:50.525 "period_us": 100000, 00:14:50.525 "enable": false 00:14:50.525 } 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "method": "bdev_wait_for_examine" 00:14:50.525 } 00:14:50.525 ] 00:14:50.525 }, 00:14:50.525 { 00:14:50.525 "subsystem": "nbd", 00:14:50.525 "config": [] 00:14:50.525 } 00:14:50.525 ] 00:14:50.525 }' 00:14:50.525 [2024-10-15 08:25:52.168413] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 
initialization... 00:14:50.525 [2024-10-15 08:25:52.168540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72734 ] 00:14:50.784 [2024-10-15 08:25:52.307555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.784 [2024-10-15 08:25:52.381633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.074 [2024-10-15 08:25:52.538754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.074 [2024-10-15 08:25:52.597965] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:51.669 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.669 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:51.669 08:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:51.669 Running I/O for 10 seconds... 00:14:53.979 3773.00 IOPS, 14.74 MiB/s [2024-10-15T08:25:56.646Z] 3776.00 IOPS, 14.75 MiB/s [2024-10-15T08:25:57.595Z] 3754.67 IOPS, 14.67 MiB/s [2024-10-15T08:25:58.530Z] 3770.75 IOPS, 14.73 MiB/s [2024-10-15T08:25:59.465Z] 3788.80 IOPS, 14.80 MiB/s [2024-10-15T08:26:00.400Z] 3789.17 IOPS, 14.80 MiB/s [2024-10-15T08:26:01.334Z] 3827.71 IOPS, 14.95 MiB/s [2024-10-15T08:26:02.717Z] 3841.38 IOPS, 15.01 MiB/s [2024-10-15T08:26:03.652Z] 3858.22 IOPS, 15.07 MiB/s [2024-10-15T08:26:03.652Z] 3870.30 IOPS, 15.12 MiB/s 00:15:01.921 Latency(us) 00:15:01.921 [2024-10-15T08:26:03.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.921 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:01.921 Verification LBA range: start 0x0 length 0x2000 00:15:01.921 TLSTESTn1 : 10.02 3876.06 15.14 0.00 0.00 32962.60 7119.59 24665.37 00:15:01.921 [2024-10-15T08:26:03.652Z] =================================================================================================================== 00:15:01.921 [2024-10-15T08:26:03.652Z] Total : 3876.06 15.14 0.00 0.00 32962.60 7119.59 24665.37 00:15:01.921 { 00:15:01.921 "results": [ 00:15:01.921 { 00:15:01.921 "job": "TLSTESTn1", 00:15:01.921 "core_mask": "0x4", 00:15:01.921 "workload": "verify", 00:15:01.921 "status": "finished", 00:15:01.921 "verify_range": { 00:15:01.921 "start": 0, 00:15:01.921 "length": 8192 00:15:01.921 }, 00:15:01.921 "queue_depth": 128, 00:15:01.921 "io_size": 4096, 00:15:01.921 "runtime": 10.017401, 00:15:01.921 "iops": 3876.0552762138605, 00:15:01.921 "mibps": 15.140840922710392, 00:15:01.921 "io_failed": 0, 00:15:01.921 "io_timeout": 0, 00:15:01.921 "avg_latency_us": 32962.59883982505, 00:15:01.921 "min_latency_us": 7119.592727272728, 00:15:01.921 "max_latency_us": 24665.36727272727 00:15:01.921 } 00:15:01.921 ], 00:15:01.921 "core_count": 1 00:15:01.921 } 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72734 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72734 ']' 00:15:01.921 08:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72734 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72734 00:15:01.921 killing process with pid 72734 00:15:01.921 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.921 00:15:01.921 Latency(us) 00:15:01.921 [2024-10-15T08:26:03.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.921 [2024-10-15T08:26:03.652Z] =================================================================================================================== 00:15:01.921 [2024-10-15T08:26:03.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72734' 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72734 00:15:01.921 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72734 00:15:02.179 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72702 00:15:02.179 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72702 ']' 00:15:02.179 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72702 00:15:02.180 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:02.180 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.180 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72702 00:15:02.180 killing process with pid 72702 00:15:02.180 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:02.180 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:02.180 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72702' 00:15:02.180 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72702 00:15:02.180 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72702 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72874 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:02.438 
08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72874 00:15:02.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72874 ']' 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.438 08:26:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.438 [2024-10-15 08:26:04.051393] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:02.438 [2024-10-15 08:26:04.051494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.697 [2024-10-15 08:26:04.190015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.697 [2024-10-15 08:26:04.272930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.697 [2024-10-15 08:26:04.273010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.697 [2024-10-15 08:26:04.273037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.697 [2024-10-15 08:26:04.273048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.697 [2024-10-15 08:26:04.273058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
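Both applications are considered ready only once their UNIX-domain RPC socket answers, which is what the repeated "Waiting for process to start up and listen on UNIX domain socket ..." messages correspond to. A rough approximation of that wait (not the actual waitforlisten helper), using the standard rpc_get_methods call as the readiness probe:

    # Poll the RPC socket until the freshly started app responds, up to ~100 tries
    # (mirroring the max_retries=100 visible in the trace).
    wait_for_rpc() {
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc /var/tmp/spdk.sock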
00:15:02.697 [2024-10-15 08:26:04.273651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.697 [2024-10-15 08:26:04.355098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.KYjIAIRN2h 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KYjIAIRN2h 00:15:02.956 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:03.216 [2024-10-15 08:26:04.786162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.216 08:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:03.492 08:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:03.755 [2024-10-15 08:26:05.310361] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.755 [2024-10-15 08:26:05.311057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.755 08:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:04.013 malloc0 00:15:04.013 08:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:04.272 08:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:15:04.531 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72922 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72922 /var/tmp/bdevperf.sock 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72922 ']' 00:15:04.790 
08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:04.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.790 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.048 [2024-10-15 08:26:06.536631] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:05.048 [2024-10-15 08:26:06.537061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72922 ] 00:15:05.048 [2024-10-15 08:26:06.674991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.048 [2024-10-15 08:26:06.759796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.307 [2024-10-15 08:26:06.834091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.307 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.307 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:05.307 08:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:15:05.566 08:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:05.825 [2024-10-15 08:26:07.470039] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:05.825 nvme0n1 00:15:06.084 08:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:06.084 Running I/O for 1 seconds... 
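On the initiator side the TLS connection reduces to three RPCs against the idle bdevperf process, all visible in the trace above: register the PSK interchange file with the keyring, attach an NVMe/TCP controller that references that key, then release the queued workload. Condensed, with the key path and NQNs from this particular run:

    # Register the PSK file with the bdevperf application's keyring.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h

    # Attach a TLS-protected NVMe/TCP controller; --psk points at the keyring entry.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the queued verify workload against the resulting nvme0n1 bdev.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests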
00:15:07.034 3840.00 IOPS, 15.00 MiB/s 00:15:07.034 Latency(us) 00:15:07.034 [2024-10-15T08:26:08.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.034 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:07.034 Verification LBA range: start 0x0 length 0x2000 00:15:07.034 nvme0n1 : 1.02 3890.14 15.20 0.00 0.00 32537.16 7983.48 25261.15 00:15:07.034 [2024-10-15T08:26:08.765Z] =================================================================================================================== 00:15:07.034 [2024-10-15T08:26:08.765Z] Total : 3890.14 15.20 0.00 0.00 32537.16 7983.48 25261.15 00:15:07.034 { 00:15:07.034 "results": [ 00:15:07.034 { 00:15:07.034 "job": "nvme0n1", 00:15:07.034 "core_mask": "0x2", 00:15:07.034 "workload": "verify", 00:15:07.034 "status": "finished", 00:15:07.034 "verify_range": { 00:15:07.034 "start": 0, 00:15:07.034 "length": 8192 00:15:07.034 }, 00:15:07.034 "queue_depth": 128, 00:15:07.034 "io_size": 4096, 00:15:07.034 "runtime": 1.020015, 00:15:07.034 "iops": 3890.138870506806, 00:15:07.034 "mibps": 15.195854962917211, 00:15:07.034 "io_failed": 0, 00:15:07.034 "io_timeout": 0, 00:15:07.034 "avg_latency_us": 32537.164574780058, 00:15:07.034 "min_latency_us": 7983.476363636363, 00:15:07.034 "max_latency_us": 25261.14909090909 00:15:07.034 } 00:15:07.034 ], 00:15:07.034 "core_count": 1 00:15:07.034 } 00:15:07.034 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72922 00:15:07.034 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72922 ']' 00:15:07.034 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72922 00:15:07.034 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:07.034 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.034 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72922 00:15:07.293 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:07.293 killing process with pid 72922 00:15:07.293 Received shutdown signal, test time was about 1.000000 seconds 00:15:07.293 00:15:07.293 Latency(us) 00:15:07.293 [2024-10-15T08:26:09.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.293 [2024-10-15T08:26:09.024Z] =================================================================================================================== 00:15:07.293 [2024-10-15T08:26:09.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.293 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:07.293 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72922' 00:15:07.293 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72922 00:15:07.293 08:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72922 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72874 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72874 ']' 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72874 00:15:07.551 08:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72874 00:15:07.551 killing process with pid 72874 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72874' 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72874 00:15:07.551 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72874 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=72971 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 72971 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72971 ']' 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.809 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.809 [2024-10-15 08:26:09.431369] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:07.809 [2024-10-15 08:26:09.431782] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.068 [2024-10-15 08:26:09.572618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.068 [2024-10-15 08:26:09.645209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.068 [2024-10-15 08:26:09.645574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
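[editor note] The startup notices above suggest two ways to look at the tracepoints enabled by '-e 0xFFFF'. The sketch below only restates them as commands; the spdk_trace binary location is an assumption based on this run's build layout.
# Live snapshot from the running target, as suggested by the notice above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file around for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/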
00:15:08.068 [2024-10-15 08:26:09.645722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.069 [2024-10-15 08:26:09.645879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.069 [2024-10-15 08:26:09.646071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.069 [2024-10-15 08:26:09.646665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.069 [2024-10-15 08:26:09.719876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.327 [2024-10-15 08:26:09.847804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.327 malloc0 00:15:08.327 [2024-10-15 08:26:09.882272] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:08.327 [2024-10-15 08:26:09.882705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72996 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72996 /var/tmp/bdevperf.sock 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72996 ']' 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
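[editor note] The 'Waiting for process to start up and listen...' message above comes from the waitforlisten helper. A minimal sketch of that idiom, assuming rpc_get_methods as the liveness probe and the same 100-retry budget as max_retries in autotest_common.sh (the real helper also tracks the child pid):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
for ((i = 0; i < 100; i++)); do
    if $RPC -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break                      # socket is up and the app answers RPCs
    fi
    sleep 0.1
done
(( i == 100 )) && { echo "bdevperf never came up on $SOCK" >&2; exit 1; }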
00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.327 08:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.327 [2024-10-15 08:26:09.982880] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:08.327 [2024-10-15 08:26:09.983307] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72996 ] 00:15:08.586 [2024-10-15 08:26:10.122317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.586 [2024-10-15 08:26:10.195861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.586 [2024-10-15 08:26:10.267418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.522 08:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.522 08:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:09.522 08:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KYjIAIRN2h 00:15:09.780 08:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:10.039 [2024-10-15 08:26:11.625915] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.039 nvme0n1 00:15:10.039 08:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:10.298 Running I/O for 1 seconds... 
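[editor note] Each run result is printed below both as a table and as a JSON blob. A small, hypothetical helper that reduces such a blob to one line; the field names are taken from the JSON shown in this log, and 'result.json' is a placeholder for wherever the output was captured.
jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' result.json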
00:15:11.233 3851.00 IOPS, 15.04 MiB/s 00:15:11.233 Latency(us) 00:15:11.233 [2024-10-15T08:26:12.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.233 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:11.233 Verification LBA range: start 0x0 length 0x2000 00:15:11.233 nvme0n1 : 1.02 3906.48 15.26 0.00 0.00 32366.86 1243.69 20256.58 00:15:11.233 [2024-10-15T08:26:12.964Z] =================================================================================================================== 00:15:11.233 [2024-10-15T08:26:12.964Z] Total : 3906.48 15.26 0.00 0.00 32366.86 1243.69 20256.58 00:15:11.233 { 00:15:11.233 "results": [ 00:15:11.233 { 00:15:11.233 "job": "nvme0n1", 00:15:11.233 "core_mask": "0x2", 00:15:11.233 "workload": "verify", 00:15:11.233 "status": "finished", 00:15:11.233 "verify_range": { 00:15:11.233 "start": 0, 00:15:11.233 "length": 8192 00:15:11.233 }, 00:15:11.233 "queue_depth": 128, 00:15:11.233 "io_size": 4096, 00:15:11.233 "runtime": 1.018819, 00:15:11.233 "iops": 3906.483879864824, 00:15:11.233 "mibps": 15.259702655721968, 00:15:11.233 "io_failed": 0, 00:15:11.233 "io_timeout": 0, 00:15:11.233 "avg_latency_us": 32366.857706715396, 00:15:11.233 "min_latency_us": 1243.6945454545455, 00:15:11.233 "max_latency_us": 20256.581818181818 00:15:11.233 } 00:15:11.233 ], 00:15:11.233 "core_count": 1 00:15:11.233 } 00:15:11.233 08:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:11.233 08:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.233 08:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.492 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.492 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:11.492 "subsystems": [ 00:15:11.492 { 00:15:11.492 "subsystem": "keyring", 00:15:11.492 "config": [ 00:15:11.492 { 00:15:11.492 "method": "keyring_file_add_key", 00:15:11.492 "params": { 00:15:11.492 "name": "key0", 00:15:11.492 "path": "/tmp/tmp.KYjIAIRN2h" 00:15:11.492 } 00:15:11.492 } 00:15:11.492 ] 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "subsystem": "iobuf", 00:15:11.492 "config": [ 00:15:11.492 { 00:15:11.492 "method": "iobuf_set_options", 00:15:11.492 "params": { 00:15:11.492 "small_pool_count": 8192, 00:15:11.492 "large_pool_count": 1024, 00:15:11.492 "small_bufsize": 8192, 00:15:11.492 "large_bufsize": 135168 00:15:11.492 } 00:15:11.492 } 00:15:11.492 ] 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "subsystem": "sock", 00:15:11.492 "config": [ 00:15:11.492 { 00:15:11.492 "method": "sock_set_default_impl", 00:15:11.492 "params": { 00:15:11.492 "impl_name": "uring" 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "sock_impl_set_options", 00:15:11.492 "params": { 00:15:11.492 "impl_name": "ssl", 00:15:11.492 "recv_buf_size": 4096, 00:15:11.492 "send_buf_size": 4096, 00:15:11.492 "enable_recv_pipe": true, 00:15:11.492 "enable_quickack": false, 00:15:11.492 "enable_placement_id": 0, 00:15:11.492 "enable_zerocopy_send_server": true, 00:15:11.492 "enable_zerocopy_send_client": false, 00:15:11.492 "zerocopy_threshold": 0, 00:15:11.492 "tls_version": 0, 00:15:11.492 "enable_ktls": false 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "sock_impl_set_options", 00:15:11.492 "params": { 00:15:11.492 "impl_name": "posix", 00:15:11.492 "recv_buf_size": 
2097152, 00:15:11.492 "send_buf_size": 2097152, 00:15:11.492 "enable_recv_pipe": true, 00:15:11.492 "enable_quickack": false, 00:15:11.492 "enable_placement_id": 0, 00:15:11.492 "enable_zerocopy_send_server": true, 00:15:11.492 "enable_zerocopy_send_client": false, 00:15:11.492 "zerocopy_threshold": 0, 00:15:11.492 "tls_version": 0, 00:15:11.492 "enable_ktls": false 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "sock_impl_set_options", 00:15:11.492 "params": { 00:15:11.492 "impl_name": "uring", 00:15:11.492 "recv_buf_size": 2097152, 00:15:11.492 "send_buf_size": 2097152, 00:15:11.492 "enable_recv_pipe": true, 00:15:11.492 "enable_quickack": false, 00:15:11.492 "enable_placement_id": 0, 00:15:11.492 "enable_zerocopy_send_server": false, 00:15:11.492 "enable_zerocopy_send_client": false, 00:15:11.492 "zerocopy_threshold": 0, 00:15:11.492 "tls_version": 0, 00:15:11.492 "enable_ktls": false 00:15:11.492 } 00:15:11.492 } 00:15:11.492 ] 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "subsystem": "vmd", 00:15:11.492 "config": [] 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "subsystem": "accel", 00:15:11.492 "config": [ 00:15:11.492 { 00:15:11.492 "method": "accel_set_options", 00:15:11.492 "params": { 00:15:11.492 "small_cache_size": 128, 00:15:11.492 "large_cache_size": 16, 00:15:11.492 "task_count": 2048, 00:15:11.492 "sequence_count": 2048, 00:15:11.492 "buf_count": 2048 00:15:11.492 } 00:15:11.492 } 00:15:11.492 ] 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "subsystem": "bdev", 00:15:11.492 "config": [ 00:15:11.492 { 00:15:11.492 "method": "bdev_set_options", 00:15:11.492 "params": { 00:15:11.492 "bdev_io_pool_size": 65535, 00:15:11.492 "bdev_io_cache_size": 256, 00:15:11.492 "bdev_auto_examine": true, 00:15:11.492 "iobuf_small_cache_size": 128, 00:15:11.492 "iobuf_large_cache_size": 16 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "bdev_raid_set_options", 00:15:11.492 "params": { 00:15:11.492 "process_window_size_kb": 1024, 00:15:11.492 "process_max_bandwidth_mb_sec": 0 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "bdev_iscsi_set_options", 00:15:11.492 "params": { 00:15:11.492 "timeout_sec": 30 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "bdev_nvme_set_options", 00:15:11.492 "params": { 00:15:11.492 "action_on_timeout": "none", 00:15:11.492 "timeout_us": 0, 00:15:11.492 "timeout_admin_us": 0, 00:15:11.492 "keep_alive_timeout_ms": 10000, 00:15:11.492 "arbitration_burst": 0, 00:15:11.492 "low_priority_weight": 0, 00:15:11.492 "medium_priority_weight": 0, 00:15:11.492 "high_priority_weight": 0, 00:15:11.492 "nvme_adminq_poll_period_us": 10000, 00:15:11.492 "nvme_ioq_poll_period_us": 0, 00:15:11.492 "io_queue_requests": 0, 00:15:11.492 "delay_cmd_submit": true, 00:15:11.492 "transport_retry_count": 4, 00:15:11.492 "bdev_retry_count": 3, 00:15:11.492 "transport_ack_timeout": 0, 00:15:11.492 "ctrlr_loss_timeout_sec": 0, 00:15:11.492 "reconnect_delay_sec": 0, 00:15:11.492 "fast_io_fail_timeout_sec": 0, 00:15:11.492 "disable_auto_failback": false, 00:15:11.492 "generate_uuids": false, 00:15:11.492 "transport_tos": 0, 00:15:11.492 "nvme_error_stat": false, 00:15:11.492 "rdma_srq_size": 0, 00:15:11.492 "io_path_stat": false, 00:15:11.492 "allow_accel_sequence": false, 00:15:11.492 "rdma_max_cq_size": 0, 00:15:11.492 "rdma_cm_event_timeout_ms": 0, 00:15:11.492 "dhchap_digests": [ 00:15:11.492 "sha256", 00:15:11.492 "sha384", 00:15:11.492 "sha512" 00:15:11.492 ], 00:15:11.492 "dhchap_dhgroups": [ 00:15:11.492 
"null", 00:15:11.492 "ffdhe2048", 00:15:11.492 "ffdhe3072", 00:15:11.492 "ffdhe4096", 00:15:11.492 "ffdhe6144", 00:15:11.492 "ffdhe8192" 00:15:11.492 ] 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "bdev_nvme_set_hotplug", 00:15:11.492 "params": { 00:15:11.492 "period_us": 100000, 00:15:11.492 "enable": false 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "bdev_malloc_create", 00:15:11.492 "params": { 00:15:11.492 "name": "malloc0", 00:15:11.492 "num_blocks": 8192, 00:15:11.492 "block_size": 4096, 00:15:11.492 "physical_block_size": 4096, 00:15:11.492 "uuid": "05582039-4ba3-401a-a164-ce7bae242e1d", 00:15:11.492 "optimal_io_boundary": 0, 00:15:11.492 "md_size": 0, 00:15:11.492 "dif_type": 0, 00:15:11.492 "dif_is_head_of_md": false, 00:15:11.492 "dif_pi_format": 0 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "bdev_wait_for_examine" 00:15:11.492 } 00:15:11.492 ] 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "subsystem": "nbd", 00:15:11.492 "config": [] 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "subsystem": "scheduler", 00:15:11.492 "config": [ 00:15:11.492 { 00:15:11.492 "method": "framework_set_scheduler", 00:15:11.492 "params": { 00:15:11.492 "name": "static" 00:15:11.492 } 00:15:11.492 } 00:15:11.492 ] 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "subsystem": "nvmf", 00:15:11.492 "config": [ 00:15:11.492 { 00:15:11.492 "method": "nvmf_set_config", 00:15:11.492 "params": { 00:15:11.492 "discovery_filter": "match_any", 00:15:11.492 "admin_cmd_passthru": { 00:15:11.492 "identify_ctrlr": false 00:15:11.492 }, 00:15:11.492 "dhchap_digests": [ 00:15:11.492 "sha256", 00:15:11.492 "sha384", 00:15:11.492 "sha512" 00:15:11.492 ], 00:15:11.492 "dhchap_dhgroups": [ 00:15:11.492 "null", 00:15:11.492 "ffdhe2048", 00:15:11.492 "ffdhe3072", 00:15:11.492 "ffdhe4096", 00:15:11.492 "ffdhe6144", 00:15:11.492 "ffdhe8192" 00:15:11.492 ] 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "nvmf_set_max_subsystems", 00:15:11.492 "params": { 00:15:11.492 "max_subsystems": 1024 00:15:11.492 } 00:15:11.492 }, 00:15:11.492 { 00:15:11.492 "method": "nvmf_set_crdt", 00:15:11.492 "params": { 00:15:11.492 "crdt1": 0, 00:15:11.493 "crdt2": 0, 00:15:11.493 "crdt3": 0 00:15:11.493 } 00:15:11.493 }, 00:15:11.493 { 00:15:11.493 "method": "nvmf_create_transport", 00:15:11.493 "params": { 00:15:11.493 "trtype": "TCP", 00:15:11.493 "max_queue_depth": 128, 00:15:11.493 "max_io_qpairs_per_ctrlr": 127, 00:15:11.493 "in_capsule_data_size": 4096, 00:15:11.493 "max_io_size": 131072, 00:15:11.493 "io_unit_size": 131072, 00:15:11.493 "max_aq_depth": 128, 00:15:11.493 "num_shared_buffers": 511, 00:15:11.493 "buf_cache_size": 4294967295, 00:15:11.493 "dif_insert_or_strip": false, 00:15:11.493 "zcopy": false, 00:15:11.493 "c2h_success": false, 00:15:11.493 "sock_priority": 0, 00:15:11.493 "abort_timeout_sec": 1, 00:15:11.493 "ack_timeout": 0, 00:15:11.493 "data_wr_pool_size": 0 00:15:11.493 } 00:15:11.493 }, 00:15:11.493 { 00:15:11.493 "method": "nvmf_create_subsystem", 00:15:11.493 "params": { 00:15:11.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.493 "allow_any_host": false, 00:15:11.493 "serial_number": "00000000000000000000", 00:15:11.493 "model_number": "SPDK bdev Controller", 00:15:11.493 "max_namespaces": 32, 00:15:11.493 "min_cntlid": 1, 00:15:11.493 "max_cntlid": 65519, 00:15:11.493 "ana_reporting": false 00:15:11.493 } 00:15:11.493 }, 00:15:11.493 { 00:15:11.493 "method": "nvmf_subsystem_add_host", 00:15:11.493 "params": { 00:15:11.493 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:11.493 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.493 "psk": "key0" 00:15:11.493 } 00:15:11.493 }, 00:15:11.493 { 00:15:11.493 "method": "nvmf_subsystem_add_ns", 00:15:11.493 "params": { 00:15:11.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.493 "namespace": { 00:15:11.493 "nsid": 1, 00:15:11.493 "bdev_name": "malloc0", 00:15:11.493 "nguid": "055820394BA3401AA164CE7BAE242E1D", 00:15:11.493 "uuid": "05582039-4ba3-401a-a164-ce7bae242e1d", 00:15:11.493 "no_auto_visible": false 00:15:11.493 } 00:15:11.493 } 00:15:11.493 }, 00:15:11.493 { 00:15:11.493 "method": "nvmf_subsystem_add_listener", 00:15:11.493 "params": { 00:15:11.493 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.493 "listen_address": { 00:15:11.493 "trtype": "TCP", 00:15:11.493 "adrfam": "IPv4", 00:15:11.493 "traddr": "10.0.0.3", 00:15:11.493 "trsvcid": "4420" 00:15:11.493 }, 00:15:11.493 "secure_channel": false, 00:15:11.493 "sock_impl": "ssl" 00:15:11.493 } 00:15:11.493 } 00:15:11.493 ] 00:15:11.493 } 00:15:11.493 ] 00:15:11.493 }' 00:15:11.493 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:11.750 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:11.750 "subsystems": [ 00:15:11.750 { 00:15:11.750 "subsystem": "keyring", 00:15:11.750 "config": [ 00:15:11.750 { 00:15:11.750 "method": "keyring_file_add_key", 00:15:11.750 "params": { 00:15:11.750 "name": "key0", 00:15:11.750 "path": "/tmp/tmp.KYjIAIRN2h" 00:15:11.750 } 00:15:11.750 } 00:15:11.750 ] 00:15:11.750 }, 00:15:11.750 { 00:15:11.750 "subsystem": "iobuf", 00:15:11.750 "config": [ 00:15:11.750 { 00:15:11.750 "method": "iobuf_set_options", 00:15:11.750 "params": { 00:15:11.750 "small_pool_count": 8192, 00:15:11.750 "large_pool_count": 1024, 00:15:11.750 "small_bufsize": 8192, 00:15:11.751 "large_bufsize": 135168 00:15:11.751 } 00:15:11.751 } 00:15:11.751 ] 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "subsystem": "sock", 00:15:11.751 "config": [ 00:15:11.751 { 00:15:11.751 "method": "sock_set_default_impl", 00:15:11.751 "params": { 00:15:11.751 "impl_name": "uring" 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "sock_impl_set_options", 00:15:11.751 "params": { 00:15:11.751 "impl_name": "ssl", 00:15:11.751 "recv_buf_size": 4096, 00:15:11.751 "send_buf_size": 4096, 00:15:11.751 "enable_recv_pipe": true, 00:15:11.751 "enable_quickack": false, 00:15:11.751 "enable_placement_id": 0, 00:15:11.751 "enable_zerocopy_send_server": true, 00:15:11.751 "enable_zerocopy_send_client": false, 00:15:11.751 "zerocopy_threshold": 0, 00:15:11.751 "tls_version": 0, 00:15:11.751 "enable_ktls": false 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "sock_impl_set_options", 00:15:11.751 "params": { 00:15:11.751 "impl_name": "posix", 00:15:11.751 "recv_buf_size": 2097152, 00:15:11.751 "send_buf_size": 2097152, 00:15:11.751 "enable_recv_pipe": true, 00:15:11.751 "enable_quickack": false, 00:15:11.751 "enable_placement_id": 0, 00:15:11.751 "enable_zerocopy_send_server": true, 00:15:11.751 "enable_zerocopy_send_client": false, 00:15:11.751 "zerocopy_threshold": 0, 00:15:11.751 "tls_version": 0, 00:15:11.751 "enable_ktls": false 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "sock_impl_set_options", 00:15:11.751 "params": { 00:15:11.751 "impl_name": "uring", 00:15:11.751 "recv_buf_size": 2097152, 00:15:11.751 "send_buf_size": 2097152, 00:15:11.751 
"enable_recv_pipe": true, 00:15:11.751 "enable_quickack": false, 00:15:11.751 "enable_placement_id": 0, 00:15:11.751 "enable_zerocopy_send_server": false, 00:15:11.751 "enable_zerocopy_send_client": false, 00:15:11.751 "zerocopy_threshold": 0, 00:15:11.751 "tls_version": 0, 00:15:11.751 "enable_ktls": false 00:15:11.751 } 00:15:11.751 } 00:15:11.751 ] 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "subsystem": "vmd", 00:15:11.751 "config": [] 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "subsystem": "accel", 00:15:11.751 "config": [ 00:15:11.751 { 00:15:11.751 "method": "accel_set_options", 00:15:11.751 "params": { 00:15:11.751 "small_cache_size": 128, 00:15:11.751 "large_cache_size": 16, 00:15:11.751 "task_count": 2048, 00:15:11.751 "sequence_count": 2048, 00:15:11.751 "buf_count": 2048 00:15:11.751 } 00:15:11.751 } 00:15:11.751 ] 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "subsystem": "bdev", 00:15:11.751 "config": [ 00:15:11.751 { 00:15:11.751 "method": "bdev_set_options", 00:15:11.751 "params": { 00:15:11.751 "bdev_io_pool_size": 65535, 00:15:11.751 "bdev_io_cache_size": 256, 00:15:11.751 "bdev_auto_examine": true, 00:15:11.751 "iobuf_small_cache_size": 128, 00:15:11.751 "iobuf_large_cache_size": 16 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "bdev_raid_set_options", 00:15:11.751 "params": { 00:15:11.751 "process_window_size_kb": 1024, 00:15:11.751 "process_max_bandwidth_mb_sec": 0 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "bdev_iscsi_set_options", 00:15:11.751 "params": { 00:15:11.751 "timeout_sec": 30 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "bdev_nvme_set_options", 00:15:11.751 "params": { 00:15:11.751 "action_on_timeout": "none", 00:15:11.751 "timeout_us": 0, 00:15:11.751 "timeout_admin_us": 0, 00:15:11.751 "keep_alive_timeout_ms": 10000, 00:15:11.751 "arbitration_burst": 0, 00:15:11.751 "low_priority_weight": 0, 00:15:11.751 "medium_priority_weight": 0, 00:15:11.751 "high_priority_weight": 0, 00:15:11.751 "nvme_adminq_poll_period_us": 10000, 00:15:11.751 "nvme_ioq_poll_period_us": 0, 00:15:11.751 "io_queue_requests": 512, 00:15:11.751 "delay_cmd_submit": true, 00:15:11.751 "transport_retry_count": 4, 00:15:11.751 "bdev_retry_count": 3, 00:15:11.751 "transport_ack_timeout": 0, 00:15:11.751 "ctrlr_loss_timeout_sec": 0, 00:15:11.751 "reconnect_delay_sec": 0, 00:15:11.751 "fast_io_fail_timeout_sec": 0, 00:15:11.751 "disable_auto_failback": false, 00:15:11.751 "generate_uuids": false, 00:15:11.751 "transport_tos": 0, 00:15:11.751 "nvme_error_stat": false, 00:15:11.751 "rdma_srq_size": 0, 00:15:11.751 "io_path_stat": false, 00:15:11.751 "allow_accel_sequence": false, 00:15:11.751 "rdma_max_cq_size": 0, 00:15:11.751 "rdma_cm_event_timeout_ms": 0, 00:15:11.751 "dhchap_digests": [ 00:15:11.751 "sha256", 00:15:11.751 "sha384", 00:15:11.751 "sha512" 00:15:11.751 ], 00:15:11.751 "dhchap_dhgroups": [ 00:15:11.751 "null", 00:15:11.751 "ffdhe2048", 00:15:11.751 "ffdhe3072", 00:15:11.751 "ffdhe4096", 00:15:11.751 "ffdhe6144", 00:15:11.751 "ffdhe8192" 00:15:11.751 ] 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "bdev_nvme_attach_controller", 00:15:11.751 "params": { 00:15:11.751 "name": "nvme0", 00:15:11.751 "trtype": "TCP", 00:15:11.751 "adrfam": "IPv4", 00:15:11.751 "traddr": "10.0.0.3", 00:15:11.751 "trsvcid": "4420", 00:15:11.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.751 "prchk_reftag": false, 00:15:11.751 "prchk_guard": false, 00:15:11.751 "ctrlr_loss_timeout_sec": 0, 00:15:11.751 
"reconnect_delay_sec": 0, 00:15:11.751 "fast_io_fail_timeout_sec": 0, 00:15:11.751 "psk": "key0", 00:15:11.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.751 "hdgst": false, 00:15:11.751 "ddgst": false, 00:15:11.751 "multipath": "multipath" 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "bdev_nvme_set_hotplug", 00:15:11.751 "params": { 00:15:11.751 "period_us": 100000, 00:15:11.751 "enable": false 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "bdev_enable_histogram", 00:15:11.751 "params": { 00:15:11.751 "name": "nvme0n1", 00:15:11.751 "enable": true 00:15:11.751 } 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "method": "bdev_wait_for_examine" 00:15:11.751 } 00:15:11.751 ] 00:15:11.751 }, 00:15:11.751 { 00:15:11.751 "subsystem": "nbd", 00:15:11.751 "config": [] 00:15:11.751 } 00:15:11.751 ] 00:15:11.751 }' 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72996 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72996 ']' 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72996 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72996 00:15:11.751 killing process with pid 72996 00:15:11.751 Received shutdown signal, test time was about 1.000000 seconds 00:15:11.751 00:15:11.751 Latency(us) 00:15:11.751 [2024-10-15T08:26:13.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.751 [2024-10-15T08:26:13.482Z] =================================================================================================================== 00:15:11.751 [2024-10-15T08:26:13.482Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72996' 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72996 00:15:11.751 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72996 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72971 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72971 ']' 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72971 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72971 00:15:12.010 killing process with pid 72971 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72971' 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72971 00:15:12.010 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72971 00:15:12.268 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:12.268 "subsystems": [ 00:15:12.268 { 00:15:12.268 "subsystem": "keyring", 00:15:12.268 "config": [ 00:15:12.268 { 00:15:12.268 "method": "keyring_file_add_key", 00:15:12.268 "params": { 00:15:12.269 "name": "key0", 00:15:12.269 "path": "/tmp/tmp.KYjIAIRN2h" 00:15:12.269 } 00:15:12.269 } 00:15:12.269 ] 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "subsystem": "iobuf", 00:15:12.269 "config": [ 00:15:12.269 { 00:15:12.269 "method": "iobuf_set_options", 00:15:12.269 "params": { 00:15:12.269 "small_pool_count": 8192, 00:15:12.269 "large_pool_count": 1024, 00:15:12.269 "small_bufsize": 8192, 00:15:12.269 "large_bufsize": 135168 00:15:12.269 } 00:15:12.269 } 00:15:12.269 ] 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "subsystem": "sock", 00:15:12.269 "config": [ 00:15:12.269 { 00:15:12.269 "method": "sock_set_default_impl", 00:15:12.269 "params": { 00:15:12.269 "impl_name": "uring" 00:15:12.269 } 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "method": "sock_impl_set_options", 00:15:12.269 "params": { 00:15:12.269 "impl_name": "ssl", 00:15:12.269 "recv_buf_size": 4096, 00:15:12.269 "send_buf_size": 4096, 00:15:12.269 "enable_recv_pipe": true, 00:15:12.269 "enable_quickack": false, 00:15:12.269 "enable_placement_id": 0, 00:15:12.269 "enable_zerocopy_send_server": true, 00:15:12.269 "enable_zerocopy_send_client": false, 00:15:12.269 "zerocopy_threshold": 0, 00:15:12.269 "tls_version": 0, 00:15:12.269 "enable_ktls": false 00:15:12.269 } 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "method": "sock_impl_set_options", 00:15:12.269 "params": { 00:15:12.269 "impl_name": "posix", 00:15:12.269 "recv_buf_size": 2097152, 00:15:12.269 "send_buf_size": 2097152, 00:15:12.269 "enable_recv_pipe": true, 00:15:12.269 "enable_quickack": false, 00:15:12.269 "enable_placement_id": 0, 00:15:12.269 "enable_zerocopy_send_server": true, 00:15:12.269 "enable_zerocopy_send_client": false, 00:15:12.269 "zerocopy_threshold": 0, 00:15:12.269 "tls_version": 0, 00:15:12.269 "enable_ktls": false 00:15:12.269 } 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "method": "sock_impl_set_options", 00:15:12.269 "params": { 00:15:12.269 "impl_name": "uring", 00:15:12.269 "recv_buf_size": 2097152, 00:15:12.269 "send_buf_size": 2097152, 00:15:12.269 "enable_recv_pipe": true, 00:15:12.269 "enable_quickack": false, 00:15:12.269 "enable_placement_id": 0, 00:15:12.269 "enable_zerocopy_send_server": false, 00:15:12.269 "enable_zerocopy_send_client": false, 00:15:12.269 "zerocopy_threshold": 0, 00:15:12.269 "tls_version": 0, 00:15:12.269 "enable_ktls": false 00:15:12.269 } 00:15:12.269 } 00:15:12.269 ] 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "subsystem": "vmd", 00:15:12.269 "config": [] 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "subsystem": "accel", 00:15:12.269 "config": [ 00:15:12.269 { 00:15:12.269 "method": "accel_set_options", 00:15:12.269 "params": { 00:15:12.269 "small_cache_size": 128, 00:15:12.269 "large_cache_size": 16, 00:15:12.269 "task_count": 2048, 00:15:12.269 "sequence_count": 2048, 00:15:12.269 "buf_count": 2048 00:15:12.269 } 
00:15:12.269 } 00:15:12.269 ] 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "subsystem": "bdev", 00:15:12.269 "config": [ 00:15:12.269 { 00:15:12.269 "method": "bdev_set_options", 00:15:12.269 "params": { 00:15:12.269 "bdev_io_pool_size": 65535, 00:15:12.269 "bdev_io_cache_size": 256, 00:15:12.269 "bdev_auto_examine": true, 00:15:12.269 "iobuf_small_cache_size": 128, 00:15:12.269 "iobuf_large_cache_size": 16 00:15:12.269 } 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "method": "bdev_raid_set_options", 00:15:12.269 "params": { 00:15:12.269 "process_window_size_kb": 1024, 00:15:12.269 "process_max_bandwidth_mb_sec": 0 00:15:12.269 } 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "method": "bdev_iscsi_set_options", 00:15:12.269 "params": { 00:15:12.269 "timeout_sec": 30 00:15:12.269 } 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "method": "bdev_nvme_set_options", 00:15:12.269 "params": { 00:15:12.269 "action_on_timeout": "none", 00:15:12.269 "timeout_us": 0, 00:15:12.269 "timeout_admin_us": 0, 00:15:12.269 "keep_alive_timeout_ms": 10000, 00:15:12.269 "arbitration_burst": 0, 00:15:12.269 "low_priority_weight": 0, 00:15:12.269 "medium_priority_weight": 0, 00:15:12.269 "high_priority_weight": 0, 00:15:12.269 "nvme_adminq_poll_period_us": 10000, 00:15:12.269 "nvme_ioq_poll_period_us": 0, 00:15:12.269 "io_queue_requests": 0, 00:15:12.269 "delay_cmd_submit": true, 00:15:12.269 "transport_retry_count": 4, 00:15:12.269 "bdev_retry_count": 3, 00:15:12.269 "transport_ack_timeout": 0, 00:15:12.269 "ctrlr_loss_timeout_sec": 0, 00:15:12.269 "reconnect_delay_sec": 0, 00:15:12.269 "fast_io_fail_timeout_sec": 0, 00:15:12.269 "disable_auto_failback": false, 00:15:12.269 "generate_uuids": false, 00:15:12.269 "transport_tos": 0, 00:15:12.269 "nvme_error_stat": false, 00:15:12.269 "rdma_srq_size": 0, 00:15:12.269 "io_path_stat": false, 00:15:12.269 "allow_accel_sequence": false, 00:15:12.269 "rdma_max_cq_size": 0, 00:15:12.269 "rdma_cm_event_timeout_ms": 0, 00:15:12.269 "dhchap_digests": [ 00:15:12.269 "sha256", 00:15:12.269 "sha384", 00:15:12.269 "sha512" 00:15:12.269 ], 00:15:12.269 "dhchap_dhgroups": [ 00:15:12.269 "null", 00:15:12.269 "ffdhe2048", 00:15:12.269 "ffdhe3072", 00:15:12.269 "ffdhe4096", 00:15:12.269 "ffdhe6144", 00:15:12.269 "ffdhe8192" 00:15:12.269 ] 00:15:12.269 } 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "method": "bdev_nvme_set_hotplug", 00:15:12.269 "params": { 00:15:12.269 "period_us": 100000, 00:15:12.269 "enable": false 00:15:12.269 } 00:15:12.269 }, 00:15:12.269 { 00:15:12.269 "method": "bdev_malloc_create", 00:15:12.269 "params": { 00:15:12.269 "name": "malloc0", 00:15:12.270 "num_blocks": 8192, 00:15:12.270 "block_size": 4096, 00:15:12.270 "physical_block_size": 4096, 00:15:12.270 "uuid": "05582039-4ba3-401a-a164-ce7bae242e1d", 00:15:12.270 "optimal_io_boundary": 0, 00:15:12.270 "md_size": 0, 00:15:12.270 "dif_type": 0, 00:15:12.270 "dif_is_head_of_md": false, 00:15:12.270 "dif_pi_format": 0 00:15:12.270 } 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "method": "bdev_wait_for_examine" 00:15:12.270 } 00:15:12.270 ] 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "subsystem": "nbd", 00:15:12.270 "config": [] 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "subsystem": "scheduler", 00:15:12.270 "config": [ 00:15:12.270 { 00:15:12.270 "method": "framework_set_scheduler", 00:15:12.270 "params": { 00:15:12.270 "name": "static" 00:15:12.270 } 00:15:12.270 } 00:15:12.270 ] 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "subsystem": "nvmf", 00:15:12.270 "config": [ 00:15:12.270 { 00:15:12.270 "method": 
"nvmf_set_config", 00:15:12.270 "params": { 00:15:12.270 "discovery_filter": "match_any", 00:15:12.270 "admin_cmd_passthru": { 00:15:12.270 "identify_ctrlr": false 00:15:12.270 }, 00:15:12.270 "dhchap_digests": [ 00:15:12.270 "sha256", 00:15:12.270 "sha384", 00:15:12.270 "sha512" 00:15:12.270 ], 00:15:12.270 "dhchap_dhgroups": [ 00:15:12.270 "null", 00:15:12.270 "ffdhe2048", 00:15:12.270 "ffdhe3072", 00:15:12.270 "ffdhe4096", 00:15:12.270 "ffdhe6144", 00:15:12.270 "ffdhe8192" 00:15:12.270 ] 00:15:12.270 } 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "method": "nvmf_set_max_subsystems", 00:15:12.270 "params": { 00:15:12.270 "max_ 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:12.270 subsystems": 1024 00:15:12.270 } 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "method": "nvmf_set_crdt", 00:15:12.270 "params": { 00:15:12.270 "crdt1": 0, 00:15:12.270 "crdt2": 0, 00:15:12.270 "crdt3": 0 00:15:12.270 } 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "method": "nvmf_create_transport", 00:15:12.270 "params": { 00:15:12.270 "trtype": "TCP", 00:15:12.270 "max_queue_depth": 128, 00:15:12.270 "max_io_qpairs_per_ctrlr": 127, 00:15:12.270 "in_capsule_data_size": 4096, 00:15:12.270 "max_io_size": 131072, 00:15:12.270 "io_unit_size": 131072, 00:15:12.270 "max_aq_depth": 128, 00:15:12.270 "num_shared_buffers": 511, 00:15:12.270 "buf_cache_size": 4294967295, 00:15:12.270 "dif_insert_or_strip": false, 00:15:12.270 "zcopy": false, 00:15:12.270 "c2h_success": false, 00:15:12.270 "sock_priority": 0, 00:15:12.270 "abort_timeout_sec": 1, 00:15:12.270 "ack_timeout": 0, 00:15:12.270 "data_wr_pool_size": 0 00:15:12.270 } 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "method": "nvmf_create_subsystem", 00:15:12.270 "params": { 00:15:12.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.270 "allow_any_host": false, 00:15:12.270 "serial_number": "00000000000000000000", 00:15:12.270 "model_number": "SPDK bdev Controller", 00:15:12.270 "max_namespaces": 32, 00:15:12.270 "min_cntlid": 1, 00:15:12.270 "max_cntlid": 65519, 00:15:12.270 "ana_reporting": false 00:15:12.270 } 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "method": "nvmf_subsystem_add_host", 00:15:12.270 "params": { 00:15:12.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.270 "host": "nqn.2016-06.io.spdk:host1", 00:15:12.270 "psk": "key0" 00:15:12.270 } 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "method": "nvmf_subsystem_add_ns", 00:15:12.270 "params": { 00:15:12.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.270 "namespace": { 00:15:12.270 "nsid": 1, 00:15:12.270 "bdev_name": "malloc0", 00:15:12.270 "nguid": "055820394BA3401AA164CE7BAE242E1D", 00:15:12.270 "uuid": "05582039-4ba3-401a-a164-ce7bae242e1d", 00:15:12.270 "no_auto_visible": false 00:15:12.270 } 00:15:12.270 } 00:15:12.270 }, 00:15:12.270 { 00:15:12.270 "method": "nvmf_subsystem_add_listener", 00:15:12.270 "params": { 00:15:12.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.270 "listen_address": { 00:15:12.270 "trtype": "TCP", 00:15:12.270 "adrfam": "IPv4", 00:15:12.270 "traddr": "10.0.0.3", 00:15:12.270 "trsvcid": "4420" 00:15:12.270 }, 00:15:12.270 "secure_channel": false, 00:15:12.270 "sock_impl": "ssl" 00:15:12.270 } 00:15:12.270 } 00:15:12.270 ] 00:15:12.270 } 00:15:12.270 ] 00:15:12.270 }' 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.270 08:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=73062 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 73062 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73062 ']' 00:15:12.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.270 08:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.529 [2024-10-15 08:26:14.061885] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:12.529 [2024-10-15 08:26:14.062025] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.529 [2024-10-15 08:26:14.201250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.788 [2024-10-15 08:26:14.271807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.788 [2024-10-15 08:26:14.271893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.788 [2024-10-15 08:26:14.271923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.788 [2024-10-15 08:26:14.271932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.788 [2024-10-15 08:26:14.271940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
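[editor note] The '-c /dev/fd/62' above is the replay pattern used for pid 73062: the target configuration captured earlier with save_config is fed back into a fresh nvmf_tgt at startup, so the new instance comes up with the same keyring, subsystem and listener state. A hedged sketch of the same pattern, with 'tgt_config.json' as a placeholder file name:
# Capture the running target's configuration, then start a new target from it.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock save_config > tgt_config.json
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -c <(cat tgt_config.json)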
00:15:12.788 [2024-10-15 08:26:14.272487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.788 [2024-10-15 08:26:14.459968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.047 [2024-10-15 08:26:14.553938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.047 [2024-10-15 08:26:14.585851] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:13.047 [2024-10-15 08:26:14.586163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=73094 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 73094 /var/tmp/bdevperf.sock 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73094 ']' 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:13.613 "subsystems": [ 00:15:13.613 { 00:15:13.613 "subsystem": "keyring", 00:15:13.613 "config": [ 00:15:13.613 { 00:15:13.613 "method": "keyring_file_add_key", 00:15:13.613 "params": { 00:15:13.613 "name": "key0", 00:15:13.613 "path": "/tmp/tmp.KYjIAIRN2h" 00:15:13.613 } 00:15:13.613 } 00:15:13.613 ] 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "subsystem": "iobuf", 00:15:13.613 "config": [ 00:15:13.613 { 00:15:13.613 "method": "iobuf_set_options", 00:15:13.613 "params": { 00:15:13.613 "small_pool_count": 8192, 00:15:13.613 "large_pool_count": 1024, 00:15:13.613 "small_bufsize": 8192, 00:15:13.613 "large_bufsize": 135168 00:15:13.613 } 00:15:13.613 } 00:15:13.613 ] 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "subsystem": "sock", 00:15:13.613 "config": [ 00:15:13.613 { 00:15:13.613 "method": "sock_set_default_impl", 00:15:13.613 "params": { 00:15:13.613 "impl_name": "uring" 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "sock_impl_set_options", 00:15:13.613 "params": { 00:15:13.613 "impl_name": "ssl", 00:15:13.613 "recv_buf_size": 4096, 00:15:13.613 "send_buf_size": 4096, 00:15:13.613 "enable_recv_pipe": true, 00:15:13.613 "enable_quickack": false, 00:15:13.613 "enable_placement_id": 0, 00:15:13.613 "enable_zerocopy_send_server": true, 00:15:13.613 "enable_zerocopy_send_client": false, 00:15:13.613 
"zerocopy_threshold": 0, 00:15:13.613 "tls_version": 0, 00:15:13.613 "enable_ktls": false 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "sock_impl_set_options", 00:15:13.613 "params": { 00:15:13.613 "impl_name": "posix", 00:15:13.613 "recv_buf_size": 2097152, 00:15:13.613 "send_buf_size": 2097152, 00:15:13.613 "enable_recv_pipe": true, 00:15:13.613 "enable_quickack": false, 00:15:13.613 "enable_placement_id": 0, 00:15:13.613 "enable_zerocopy_send_server": true, 00:15:13.613 "enable_zerocopy_send_client": false, 00:15:13.613 "zerocopy_threshold": 0, 00:15:13.613 "tls_version": 0, 00:15:13.613 "enable_ktls": false 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "sock_impl_set_options", 00:15:13.613 "params": { 00:15:13.613 "impl_name": "uring", 00:15:13.613 "recv_buf_size": 2097152, 00:15:13.613 "send_buf_size": 2097152, 00:15:13.613 "enable_recv_pipe": true, 00:15:13.613 "enable_quickack": false, 00:15:13.613 "enable_placement_id": 0, 00:15:13.613 "enable_zerocopy_send_server": false, 00:15:13.613 "enable_zerocopy_send_client": false, 00:15:13.613 "zerocopy_threshold": 0, 00:15:13.613 "tls_version": 0, 00:15:13.613 "enable_ktls": false 00:15:13.613 } 00:15:13.613 } 00:15:13.613 ] 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "subsystem": "vmd", 00:15:13.613 "config": [] 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "subsystem": "accel", 00:15:13.613 "config": [ 00:15:13.613 { 00:15:13.613 "method": "accel_set_options", 00:15:13.613 "params": { 00:15:13.613 "small_cache_size": 128, 00:15:13.613 "large_cache_size": 16, 00:15:13.613 "task_count": 2048, 00:15:13.613 "sequence_count": 2048, 00:15:13.613 "buf_count": 2048 00:15:13.613 } 00:15:13.613 } 00:15:13.613 ] 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "subsystem": "bdev", 00:15:13.613 "config": [ 00:15:13.613 { 00:15:13.613 "method": "bdev_set_options", 00:15:13.613 "params": { 00:15:13.613 "bdev_io_pool_size": 65535, 00:15:13.613 "bdev_io_cache_size": 256, 00:15:13.613 "bdev_auto_examine": true, 00:15:13.613 "iobuf_small_cache_size": 128, 00:15:13.613 "iobuf_large_cache_size": 16 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "bdev_raid_set_options", 00:15:13.613 "params": { 00:15:13.613 "process_window_size_kb": 1024, 00:15:13.613 "process_max_bandwidth_mb_sec": 0 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "bdev_iscsi_set_options", 00:15:13.613 "params": { 00:15:13.613 "timeout_sec": 30 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "bdev_nvme_set_options", 00:15:13.613 "params": { 00:15:13.613 "action_on_timeout": "none", 00:15:13.613 "timeout_us": 0, 00:15:13.613 "timeout_admin_us": 0, 00:15:13.613 "keep_alive_timeout_ms": 10000, 00:15:13.613 "arbitration_burst": 0, 00:15:13.613 "low_priority_weight": 0, 00:15:13.613 "medium_priority_weight": 0, 00:15:13.613 "high_priority_weight": 0, 00:15:13.613 "nvme_adminq_poll_period_us": 10000, 00:15:13.613 "nvme_ioq_poll_period_us": 0, 00:15:13.613 "io_queue_requests": 512, 00:15:13.613 "delay_cmd_submit": true, 00:15:13.613 "transport_retry_count": 4, 00:15:13.613 "bdev_retry_count": 3, 00:15:13.613 "transport_ack_timeout": 0, 00:15:13.613 "ctrlr_loss_timeout_sec": 0, 00:15:13.613 "reconnect_delay_sec": 0, 00:15:13.613 "fast_io_fail_timeout_sec": 0, 00:15:13.613 "disable_auto_failback": false, 00:15:13.613 "generate_uuids": false, 00:15:13.613 "transport_tos": 0, 00:15:13.613 "nvme_error_stat": false, 00:15:13.613 "rdma_srq_size": 0, 00:15:13.613 "io_path_stat": false, 00:15:13.613 
"allow_accel_sequence": false, 00:15:13.613 "rdma_max_cq_size": 0, 00:15:13.613 "rdma_cm_event_timeout_ms": 0, 00:15:13.613 "dhchap_digests": [ 00:15:13.613 "sha256", 00:15:13.613 "sha384", 00:15:13.613 "sha512" 00:15:13.613 ], 00:15:13.613 "dhchap_dhgroups": [ 00:15:13.613 "null", 00:15:13.613 "ffdhe2048", 00:15:13.613 "ffdhe3072", 00:15:13.613 "ffdhe4096", 00:15:13.613 "ffdhe6144", 00:15:13.613 "ffdhe8192" 00:15:13.613 ] 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "bdev_nvme_attach_controller", 00:15:13.613 "params": { 00:15:13.613 "name": "nvme0", 00:15:13.613 "trtype": "TCP", 00:15:13.613 "adrfam": "IPv4", 00:15:13.613 "traddr": "10.0.0.3", 00:15:13.613 "trsvcid": "4420", 00:15:13.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:13.613 "prchk_reftag": false, 00:15:13.613 "prchk_guard": false, 00:15:13.613 "ctrlr_loss_timeout_sec": 0, 00:15:13.613 "reconnect_delay_sec": 0, 00:15:13.613 "fast_io_fail_timeout_sec": 0, 00:15:13.613 "psk": "key0", 00:15:13.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:13.613 "hdgst": false, 00:15:13.613 "ddgst": false, 00:15:13.613 "multipath": "multipath" 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "bdev_nvme_set_hotplug", 00:15:13.613 "params": { 00:15:13.613 "period_us": 100000, 00:15:13.613 "enable": false 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "bdev_enable_histogram", 00:15:13.613 "params": { 00:15:13.613 "name": "nvme0n1", 00:15:13.613 "enable": true 00:15:13.613 } 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "method": "bdev_wait_for_examine" 00:15:13.613 } 00:15:13.613 ] 00:15:13.613 }, 00:15:13.613 { 00:15:13.613 "subsystem": "nbd", 00:15:13.613 "config": [] 00:15:13.613 } 00:15:13.613 ] 00:15:13.613 }' 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.613 08:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:13.613 [2024-10-15 08:26:15.165412] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:15:13.613 [2024-10-15 08:26:15.165521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73094 ] 00:15:13.613 [2024-10-15 08:26:15.301330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.872 [2024-10-15 08:26:15.381452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.872 [2024-10-15 08:26:15.537305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.872 [2024-10-15 08:26:15.600420] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:14.846 08:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:14.846 08:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:14.846 08:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:14.846 08:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:14.846 08:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.846 08:26:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.128 Running I/O for 1 seconds... 00:15:16.065 3829.00 IOPS, 14.96 MiB/s 00:15:16.065 Latency(us) 00:15:16.065 [2024-10-15T08:26:17.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.065 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:16.065 Verification LBA range: start 0x0 length 0x2000 00:15:16.065 nvme0n1 : 1.03 3840.18 15.00 0.00 0.00 32941.09 7506.85 21209.83 00:15:16.065 [2024-10-15T08:26:17.796Z] =================================================================================================================== 00:15:16.065 [2024-10-15T08:26:17.796Z] Total : 3840.18 15.00 0.00 0.00 32941.09 7506.85 21209.83 00:15:16.065 { 00:15:16.065 "results": [ 00:15:16.065 { 00:15:16.065 "job": "nvme0n1", 00:15:16.065 "core_mask": "0x2", 00:15:16.065 "workload": "verify", 00:15:16.065 "status": "finished", 00:15:16.065 "verify_range": { 00:15:16.065 "start": 0, 00:15:16.065 "length": 8192 00:15:16.065 }, 00:15:16.065 "queue_depth": 128, 00:15:16.065 "io_size": 4096, 00:15:16.065 "runtime": 1.03068, 00:15:16.065 "iops": 3840.1831800364807, 00:15:16.065 "mibps": 15.000715547017503, 00:15:16.065 "io_failed": 0, 00:15:16.065 "io_timeout": 0, 00:15:16.065 "avg_latency_us": 32941.09358445496, 00:15:16.065 "min_latency_us": 7506.850909090909, 00:15:16.065 "max_latency_us": 21209.832727272726 00:15:16.065 } 00:15:16.065 ], 00:15:16.065 "core_count": 1 00:15:16.065 } 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 
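[editor note] Before the network teardown, the cleanup below archives whatever instance-0 files the apps left in /dev/shm (here nvmf_trace.0) into the output directory. A simplified sketch of that step, with paths taken from this run:
out=/home/vagrant/spdk_repo/spdk/../output
for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
    tar -C /dev/shm/ -czf "$out/${f}_shm.tar.gz" "$f"
done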
00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:16.065 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:16.065 nvmf_trace.0 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73094 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73094 ']' 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73094 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73094 00:15:16.327 killing process with pid 73094 00:15:16.327 Received shutdown signal, test time was about 1.000000 seconds 00:15:16.327 00:15:16.327 Latency(us) 00:15:16.327 [2024-10-15T08:26:18.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.327 [2024-10-15T08:26:18.058Z] =================================================================================================================== 00:15:16.327 [2024-10-15T08:26:18.058Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73094' 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73094 00:15:16.327 08:26:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73094 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:16.587 rmmod nvme_tcp 00:15:16.587 rmmod nvme_fabrics 00:15:16.587 rmmod nvme_keyring 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 73062 ']' 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 73062 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73062 ']' 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73062 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73062 00:15:16.587 killing process with pid 73062 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73062' 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73062 00:15:16.587 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73062 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:16.845 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:17.104 08:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2C2HCzdWcZ /tmp/tmp.1EuBd6YXwq /tmp/tmp.KYjIAIRN2h 00:15:17.104 00:15:17.104 real 1m29.012s 00:15:17.104 user 2m23.517s 00:15:17.104 sys 0m28.671s 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.104 ************************************ 00:15:17.104 END TEST nvmf_tls 00:15:17.104 ************************************ 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:17.104 ************************************ 00:15:17.104 START TEST nvmf_fips 00:15:17.104 ************************************ 00:15:17.104 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:17.364 * Looking for test storage... 
00:15:17.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:17.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.364 --rc genhtml_branch_coverage=1 00:15:17.364 --rc genhtml_function_coverage=1 00:15:17.364 --rc genhtml_legend=1 00:15:17.364 --rc geninfo_all_blocks=1 00:15:17.364 --rc geninfo_unexecuted_blocks=1 00:15:17.364 00:15:17.364 ' 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:17.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.364 --rc genhtml_branch_coverage=1 00:15:17.364 --rc genhtml_function_coverage=1 00:15:17.364 --rc genhtml_legend=1 00:15:17.364 --rc geninfo_all_blocks=1 00:15:17.364 --rc geninfo_unexecuted_blocks=1 00:15:17.364 00:15:17.364 ' 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:17.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.364 --rc genhtml_branch_coverage=1 00:15:17.364 --rc genhtml_function_coverage=1 00:15:17.364 --rc genhtml_legend=1 00:15:17.364 --rc geninfo_all_blocks=1 00:15:17.364 --rc geninfo_unexecuted_blocks=1 00:15:17.364 00:15:17.364 ' 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:17.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.364 --rc genhtml_branch_coverage=1 00:15:17.364 --rc genhtml_function_coverage=1 00:15:17.364 --rc genhtml_legend=1 00:15:17.364 --rc geninfo_all_blocks=1 00:15:17.364 --rc geninfo_unexecuted_blocks=1 00:15:17.364 00:15:17.364 ' 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:17.364 08:26:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.364 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.365 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:17.365 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:17.625 Error setting digest 00:15:17.625 40B2097C757F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:17.625 40B2097C757F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:17.625 
08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:17.625 Cannot find device "nvmf_init_br" 00:15:17.625 08:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:17.625 Cannot find device "nvmf_init_br2" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:17.625 Cannot find device "nvmf_tgt_br" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.625 Cannot find device "nvmf_tgt_br2" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:17.625 Cannot find device "nvmf_init_br" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:17.625 Cannot find device "nvmf_init_br2" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:17.625 Cannot find device "nvmf_tgt_br" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:17.625 Cannot find device "nvmf_tgt_br2" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:17.625 Cannot find device "nvmf_br" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:17.625 Cannot find device "nvmf_init_if" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:17.625 Cannot find device "nvmf_init_if2" 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.625 08:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.625 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.884 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:17.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:17.885 00:15:17.885 --- 10.0.0.3 ping statistics --- 00:15:17.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.885 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:17.885 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:17.885 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:15:17.885 00:15:17.885 --- 10.0.0.4 ping statistics --- 00:15:17.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.885 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:15:17.885 00:15:17.885 --- 10.0.0.1 ping statistics --- 00:15:17.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.885 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:17.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:17.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:17.885 00:15:17.885 --- 10.0.0.2 ping statistics --- 00:15:17.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.885 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # return 0 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=73412 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 73412 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73412 ']' 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.885 08:26:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:18.145 [2024-10-15 08:26:19.696287] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:15:18.145 [2024-10-15 08:26:19.697057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.145 [2024-10-15 08:26:19.841304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.404 [2024-10-15 08:26:19.928173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.404 [2024-10-15 08:26:19.928245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.404 [2024-10-15 08:26:19.928261] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.404 [2024-10-15 08:26:19.928271] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.404 [2024-10-15 08:26:19.928282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.404 [2024-10-15 08:26:19.928817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.404 [2024-10-15 08:26:20.008739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.8Nt 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.8Nt 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.8Nt 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.8Nt 00:15:19.341 08:26:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.341 [2024-10-15 08:26:21.016965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.341 [2024-10-15 08:26:21.032876] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:19.341 [2024-10-15 08:26:21.033086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:19.602 malloc0 00:15:19.602 08:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73455 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73455 /var/tmp/bdevperf.sock 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73455 ']' 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.602 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:19.602 [2024-10-15 08:26:21.185925] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:19.602 [2024-10-15 08:26:21.186050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73455 ] 00:15:19.602 [2024-10-15 08:26:21.325570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.860 [2024-10-15 08:26:21.414253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.860 [2024-10-15 08:26:21.491628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.860 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.860 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:19.860 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.8Nt 00:15:20.120 08:26:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:20.378 [2024-10-15 08:26:22.096316] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:20.636 TLSTESTn1 00:15:20.636 08:26:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:20.636 Running I/O for 10 seconds... 
00:15:22.948 3840.00 IOPS, 15.00 MiB/s [2024-10-15T08:26:25.621Z] 3914.00 IOPS, 15.29 MiB/s [2024-10-15T08:26:26.558Z] 3929.67 IOPS, 15.35 MiB/s [2024-10-15T08:26:27.494Z] 3941.25 IOPS, 15.40 MiB/s [2024-10-15T08:26:28.494Z] 3942.20 IOPS, 15.40 MiB/s [2024-10-15T08:26:29.430Z] 3943.67 IOPS, 15.40 MiB/s [2024-10-15T08:26:30.365Z] 3943.43 IOPS, 15.40 MiB/s [2024-10-15T08:26:31.802Z] 3944.62 IOPS, 15.41 MiB/s [2024-10-15T08:26:32.367Z] 3951.67 IOPS, 15.44 MiB/s [2024-10-15T08:26:32.367Z] 3951.40 IOPS, 15.44 MiB/s 00:15:30.636 Latency(us) 00:15:30.636 [2024-10-15T08:26:32.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.636 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:30.636 Verification LBA range: start 0x0 length 0x2000 00:15:30.636 TLSTESTn1 : 10.02 3956.52 15.46 0.00 0.00 32288.87 6494.02 23712.12 00:15:30.636 [2024-10-15T08:26:32.367Z] =================================================================================================================== 00:15:30.636 [2024-10-15T08:26:32.367Z] Total : 3956.52 15.46 0.00 0.00 32288.87 6494.02 23712.12 00:15:30.636 { 00:15:30.636 "results": [ 00:15:30.636 { 00:15:30.636 "job": "TLSTESTn1", 00:15:30.636 "core_mask": "0x4", 00:15:30.636 "workload": "verify", 00:15:30.636 "status": "finished", 00:15:30.636 "verify_range": { 00:15:30.636 "start": 0, 00:15:30.636 "length": 8192 00:15:30.636 }, 00:15:30.636 "queue_depth": 128, 00:15:30.636 "io_size": 4096, 00:15:30.636 "runtime": 10.017651, 00:15:30.637 "iops": 3956.5163529853457, 00:15:30.637 "mibps": 15.455142003849007, 00:15:30.637 "io_failed": 0, 00:15:30.637 "io_timeout": 0, 00:15:30.637 "avg_latency_us": 32288.865753087834, 00:15:30.637 "min_latency_us": 6494.021818181818, 00:15:30.637 "max_latency_us": 23712.116363636364 00:15:30.637 } 00:15:30.637 ], 00:15:30.637 "core_count": 1 00:15:30.637 } 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:30.896 nvmf_trace.0 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73455 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73455 ']' 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill 
-0 73455 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73455 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:30.896 killing process with pid 73455 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73455' 00:15:30.896 Received shutdown signal, test time was about 10.000000 seconds 00:15:30.896 00:15:30.896 Latency(us) 00:15:30.896 [2024-10-15T08:26:32.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.896 [2024-10-15T08:26:32.627Z] =================================================================================================================== 00:15:30.896 [2024-10-15T08:26:32.627Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73455 00:15:30.896 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73455 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:31.155 rmmod nvme_tcp 00:15:31.155 rmmod nvme_fabrics 00:15:31.155 rmmod nvme_keyring 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 73412 ']' 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 73412 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73412 ']' 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73412 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.155 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73412 00:15:31.414 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:31.414 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:15:31.414 killing process with pid 73412 00:15:31.414 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73412' 00:15:31.414 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73412 00:15:31.414 08:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73412 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.672 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:31.931 08:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.8Nt 00:15:31.931 00:15:31.931 real 0m14.624s 00:15:31.931 user 0m19.768s 00:15:31.931 sys 0m5.788s 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:31.931 ************************************ 00:15:31.931 END TEST nvmf_fips 00:15:31.931 ************************************ 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.931 ************************************ 00:15:31.931 START TEST nvmf_control_msg_list 00:15:31.931 ************************************ 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:31.931 * Looking for test storage... 00:15:31.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:15:31.931 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.190 --rc genhtml_branch_coverage=1 00:15:32.190 --rc genhtml_function_coverage=1 00:15:32.190 --rc genhtml_legend=1 00:15:32.190 --rc geninfo_all_blocks=1 00:15:32.190 --rc geninfo_unexecuted_blocks=1 00:15:32.190 00:15:32.190 ' 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.190 --rc genhtml_branch_coverage=1 00:15:32.190 --rc genhtml_function_coverage=1 00:15:32.190 --rc genhtml_legend=1 00:15:32.190 --rc geninfo_all_blocks=1 00:15:32.190 --rc geninfo_unexecuted_blocks=1 00:15:32.190 00:15:32.190 ' 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.190 --rc genhtml_branch_coverage=1 00:15:32.190 --rc genhtml_function_coverage=1 00:15:32.190 --rc genhtml_legend=1 00:15:32.190 --rc geninfo_all_blocks=1 00:15:32.190 --rc geninfo_unexecuted_blocks=1 00:15:32.190 00:15:32.190 ' 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:32.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.190 --rc genhtml_branch_coverage=1 00:15:32.190 --rc genhtml_function_coverage=1 00:15:32.190 --rc genhtml_legend=1 00:15:32.190 --rc geninfo_all_blocks=1 00:15:32.190 --rc 
geninfo_unexecuted_blocks=1 00:15:32.190 00:15:32.190 ' 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:32.190 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.191 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:32.191 Cannot find device "nvmf_init_br" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:32.191 Cannot find device "nvmf_init_br2" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:32.191 Cannot find device "nvmf_tgt_br" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.191 Cannot find device "nvmf_tgt_br2" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:32.191 Cannot find device "nvmf_init_br" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:32.191 Cannot find device "nvmf_init_br2" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:32.191 Cannot find device "nvmf_tgt_br" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:32.191 Cannot find device "nvmf_tgt_br2" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:32.191 Cannot find device "nvmf_br" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:32.191 Cannot find 
device "nvmf_init_if" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:32.191 Cannot find device "nvmf_init_if2" 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.191 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.450 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.450 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.450 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.450 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:32.450 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:32.450 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:32.450 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:32.450 08:26:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:32.450 08:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:32.450 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:32.450 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:15:32.450 00:15:32.450 --- 10.0.0.3 ping statistics --- 00:15:32.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.450 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:32.450 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:32.450 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:15:32.450 00:15:32.450 --- 10.0.0.4 ping statistics --- 00:15:32.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.450 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:32.450 00:15:32.450 --- 10.0.0.1 ping statistics --- 00:15:32.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.450 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:32.450 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:32.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:15:32.450 00:15:32.450 --- 10.0.0.2 ping statistics --- 00:15:32.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.450 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # return 0 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=73830 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 73830 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 73830 ']' 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
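[editor's note] The nvmf_veth_init steps traced above build the virtual test topology: initiator-side veth interfaces (10.0.0.1, 10.0.0.2) on the host, target-side interfaces (10.0.0.3, 10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, with iptables ACCEPT rules for TCP port 4420 and ping checks in both directions. A minimal sketch of the same topology, trimmed to a single initiator/target pair (interface names, addresses, and port taken from the log; the real script creates two pairs on each side):
# create the target namespace and two veth pairs (one end of each pair stays on the host as a bridge port)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# address the initiator end on the host and the target end inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bring everything up
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers so initiator and target namespaces can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP traffic to the default port and verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3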
00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.451 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:32.709 [2024-10-15 08:26:34.234098] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:32.709 [2024-10-15 08:26:34.234232] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.709 [2024-10-15 08:26:34.378670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.017 [2024-10-15 08:26:34.453991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.017 [2024-10-15 08:26:34.454073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.017 [2024-10-15 08:26:34.454088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.017 [2024-10-15 08:26:34.454099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.017 [2024-10-15 08:26:34.454109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.017 [2024-10-15 08:26:34.454657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.017 [2024-10-15 08:26:34.531078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:33.017 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:33.018 [2024-10-15 08:26:34.664435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:33.018 Malloc0 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:33.018 [2024-10-15 08:26:34.712557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73855 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73856 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73857 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:33.018 08:26:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73855 00:15:33.278 [2024-10-15 08:26:34.896911] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:33.278 [2024-10-15 08:26:34.907507] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:33.278 [2024-10-15 08:26:34.907713] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:34.213 Initializing NVMe Controllers 00:15:34.213 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:34.213 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:34.213 Initialization complete. Launching workers. 00:15:34.213 ======================================================== 00:15:34.213 Latency(us) 00:15:34.213 Device Information : IOPS MiB/s Average min max 00:15:34.213 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3374.00 13.18 295.96 141.42 869.01 00:15:34.213 ======================================================== 00:15:34.213 Total : 3374.00 13.18 295.96 141.42 869.01 00:15:34.213 00:15:34.213 Initializing NVMe Controllers 00:15:34.213 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:34.213 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:34.213 Initialization complete. Launching workers. 00:15:34.213 ======================================================== 00:15:34.213 Latency(us) 00:15:34.213 Device Information : IOPS MiB/s Average min max 00:15:34.213 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3369.00 13.16 296.47 195.46 1021.54 00:15:34.213 ======================================================== 00:15:34.213 Total : 3369.00 13.16 296.47 195.46 1021.54 00:15:34.213 00:15:34.213 Initializing NVMe Controllers 00:15:34.213 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:34.213 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:34.213 Initialization complete. Launching workers. 
00:15:34.213 ======================================================== 00:15:34.213 Latency(us) 00:15:34.213 Device Information : IOPS MiB/s Average min max 00:15:34.213 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3376.00 13.19 295.69 195.22 517.52 00:15:34.213 ======================================================== 00:15:34.213 Total : 3376.00 13.19 295.69 195.22 517.52 00:15:34.213 00:15:34.213 08:26:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73856 00:15:34.213 08:26:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73857 00:15:34.213 08:26:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:34.213 08:26:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:34.213 08:26:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:34.213 08:26:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:34.472 rmmod nvme_tcp 00:15:34.472 rmmod nvme_fabrics 00:15:34.472 rmmod nvme_keyring 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 73830 ']' 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 73830 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 73830 ']' 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 73830 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73830 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73830' 00:15:34.472 killing process with pid 73830 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 73830 00:15:34.472 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 73830 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:34.731 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:34.988 00:15:34.988 real 0m3.142s 00:15:34.988 user 0m4.882s 00:15:34.988 
sys 0m1.450s 00:15:34.988 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.989 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:34.989 ************************************ 00:15:34.989 END TEST nvmf_control_msg_list 00:15:34.989 ************************************ 00:15:34.989 08:26:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:34.989 08:26:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.989 08:26:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.989 08:26:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.989 ************************************ 00:15:34.989 START TEST nvmf_wait_for_buf 00:15:34.989 ************************************ 00:15:34.989 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:35.248 * Looking for test storage... 00:15:35.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:35.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.248 --rc genhtml_branch_coverage=1 00:15:35.248 --rc genhtml_function_coverage=1 00:15:35.248 --rc genhtml_legend=1 00:15:35.248 --rc geninfo_all_blocks=1 00:15:35.248 --rc geninfo_unexecuted_blocks=1 00:15:35.248 00:15:35.248 ' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:35.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.248 --rc genhtml_branch_coverage=1 00:15:35.248 --rc genhtml_function_coverage=1 00:15:35.248 --rc genhtml_legend=1 00:15:35.248 --rc geninfo_all_blocks=1 00:15:35.248 --rc geninfo_unexecuted_blocks=1 00:15:35.248 00:15:35.248 ' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:35.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.248 --rc genhtml_branch_coverage=1 00:15:35.248 --rc genhtml_function_coverage=1 00:15:35.248 --rc genhtml_legend=1 00:15:35.248 --rc geninfo_all_blocks=1 00:15:35.248 --rc geninfo_unexecuted_blocks=1 00:15:35.248 00:15:35.248 ' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:35.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.248 --rc genhtml_branch_coverage=1 00:15:35.248 --rc genhtml_function_coverage=1 00:15:35.248 --rc genhtml_legend=1 00:15:35.248 --rc geninfo_all_blocks=1 00:15:35.248 --rc geninfo_unexecuted_blocks=1 00:15:35.248 00:15:35.248 ' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.248 08:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:35.248 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 
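The "lt 1.15 2" trace above comes from the cmp_versions helper in scripts/common.sh: both version strings are split on '.', '-' and ':' and compared component by component to decide whether the installed lcov predates 2.x and needs the extra branch/function coverage flags. A minimal standalone sketch of that comparison (the name version_lt is illustrative, and numeric components are assumed, unlike the script's extra regex guard):

#!/usr/bin/env bash
# Compare two dotted version strings numerically, component by component.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}    # missing components count as 0
        (( 10#$a > 10#$b )) && return 1
        (( 10#$a < 10#$b )) && return 0
    done
    return 1    # equal versions are not "less than"
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov, enabling extra coverage flags"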
00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:35.248 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:35.249 Cannot find device "nvmf_init_br" 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:35.249 Cannot find device "nvmf_init_br2" 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:35.249 Cannot find device "nvmf_tgt_br" 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.249 Cannot find device "nvmf_tgt_br2" 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:35.249 Cannot find device "nvmf_init_br" 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:35.249 Cannot find device "nvmf_init_br2" 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:35.249 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:35.507 Cannot find device "nvmf_tgt_br" 00:15:35.507 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:35.507 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:35.507 Cannot find device "nvmf_tgt_br2" 00:15:35.507 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:35.507 08:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:35.507 Cannot find device "nvmf_br" 00:15:35.507 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:35.507 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:35.507 Cannot find device "nvmf_init_if" 00:15:35.507 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:35.508 Cannot find device "nvmf_init_if2" 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.508 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.508 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:35.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:15:35.767 00:15:35.767 --- 10.0.0.3 ping statistics --- 00:15:35.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.767 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:35.767 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:35.767 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:15:35.767 00:15:35.767 --- 10.0.0.4 ping statistics --- 00:15:35.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.767 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:35.767 00:15:35.767 --- 10.0.0.1 ping statistics --- 00:15:35.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.767 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:35.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:35.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:35.767 00:15:35.767 --- 10.0.0.2 ping statistics --- 00:15:35.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.767 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # return 0 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=74092 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 74092 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 74092 ']' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.767 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:35.767 [2024-10-15 08:26:37.368855] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
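The nvmftestinit/nvmf_veth_init trace above builds a veth-plus-bridge topology so the NVMe-oF/TCP target can run inside its own network namespace while the initiator stays in the root namespace, and the target application is then launched with ip netns exec and --wait-for-rpc. A condensed sketch of that setup, reduced from the log's two initiator/target pairs to a single pair (the reduction is mine; device names, addresses and iptables rules follow the log):

#!/usr/bin/env bash
set -e
NS=nvmf_tgt_ns_spdk

# Target side lives in its own namespace; the initiator stays in the root ns.
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target    <-> bridge
ip link set nvmf_tgt_if netns "$NS"

# Addresses: initiator 10.0.0.1/24 in the root ns, target 10.0.0.3/24 in the ns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# A bridge in the root namespace stitches the two veth halves together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Let NVMe/TCP traffic (port 4420) in and allow forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity-check connectivity in both directions, then start the target in the ns.
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

The SPDK_NVMF comment on each rule is what the teardown later keys on: the iptr cleanup further down runs iptables-save | grep -v SPDK_NVMF | iptables-restore, which removes exactly the rules added here.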
00:15:35.767 [2024-10-15 08:26:37.368986] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.026 [2024-10-15 08:26:37.510936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.026 [2024-10-15 08:26:37.589430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.026 [2024-10-15 08:26:37.589489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.026 [2024-10-15 08:26:37.589500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.026 [2024-10-15 08:26:37.589508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.026 [2024-10-15 08:26:37.589515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.026 [2024-10-15 08:26:37.589978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:36.026 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.026 08:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.026 [2024-10-15 08:26:37.735667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.358 Malloc0 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.358 [2024-10-15 08:26:37.817058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:36.358 [2024-10-15 08:26:37.841143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.358 08:26:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:36.358 [2024-10-15 08:26:38.015295] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:37.734 Initializing NVMe Controllers 00:15:37.734 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:37.734 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:37.734 Initialization complete. Launching workers. 00:15:37.734 ======================================================== 00:15:37.734 Latency(us) 00:15:37.735 Device Information : IOPS MiB/s Average min max 00:15:37.735 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.00 62.50 8000.16 3836.85 12120.10 00:15:37.735 ======================================================== 00:15:37.735 Total : 500.00 62.50 8000.16 3836.85 12120.10 00:15:37.735 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:37.735 rmmod nvme_tcp 00:15:37.735 rmmod nvme_fabrics 00:15:37.735 rmmod nvme_keyring 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 74092 ']' 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 74092 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 74092 ']' 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 74092 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.735 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74092 00:15:37.993 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.993 killing process with pid 74092 00:15:37.993 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.993 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74092' 00:15:37.993 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 74092 00:15:37.993 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 74092 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.252 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.512 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:38.512 00:15:38.512 real 0m3.302s 00:15:38.512 user 0m2.637s 00:15:38.512 sys 0m0.815s 00:15:38.512 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.512 08:26:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:38.512 ************************************ 00:15:38.512 END TEST nvmf_wait_for_buf 00:15:38.512 ************************************ 00:15:38.512 08:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:38.512 08:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:38.512 08:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:38.512 ************************************ 00:15:38.512 END TEST nvmf_target_extra 00:15:38.512 ************************************ 00:15:38.512 00:15:38.512 real 5m15.645s 00:15:38.512 user 11m2.419s 00:15:38.512 sys 1m10.799s 00:15:38.512 08:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.512 08:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:38.512 08:26:40 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:38.512 08:26:40 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:38.512 08:26:40 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.512 08:26:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.512 ************************************ 00:15:38.512 START TEST nvmf_host 00:15:38.512 ************************************ 00:15:38.512 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:38.512 * Looking for test storage... 
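The nvmf_wait_for_buf run that ends above deliberately starves the TCP transport of I/O buffers: the iobuf small pool is capped at 154 buffers and the transport is created with only 24 shared buffers (-n 24 -b 24), so buffer allocations during the perf run must wait and retry, and the test then asserts that the retry counter is non-zero (4750 in this run). A sketch of the same RPC sequence driven with rpc.py directly rather than the suite's rpc_cmd wrapper, against an nvmf_tgt already started with --wait-for-rpc (the rpc.py path is inferred from the repo layout in the log, and the option spellings are copied as traced):

#!/usr/bin/env bash
set -e
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-07.io.spdk:cnode0

# Configure the buffer pools before the framework starts, then finish init.
$RPC accel_set_options --small-cache-size 0 --large-cache-size 0
$RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
$RPC framework_start_init

# Tiny transport buffer counts guarantee the small pool runs dry under load.
$RPC bdev_malloc_create -b Malloc0 32 512
$RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$RPC nvmf_create_subsystem "$SUBNQN" -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc0
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.3 -s 4420

# Drive some reads, then check that buffer allocation had to be retried.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
retries=$($RPC iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retries -gt 0 ]] && echo "buffer waits exercised: $retries retries"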
00:15:38.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:38.512 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:38.512 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:15:38.512 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:38.772 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:38.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.773 --rc genhtml_branch_coverage=1 00:15:38.773 --rc genhtml_function_coverage=1 00:15:38.773 --rc genhtml_legend=1 00:15:38.773 --rc geninfo_all_blocks=1 00:15:38.773 --rc geninfo_unexecuted_blocks=1 00:15:38.773 00:15:38.773 ' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:38.773 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:38.773 --rc genhtml_branch_coverage=1 00:15:38.773 --rc genhtml_function_coverage=1 00:15:38.773 --rc genhtml_legend=1 00:15:38.773 --rc geninfo_all_blocks=1 00:15:38.773 --rc geninfo_unexecuted_blocks=1 00:15:38.773 00:15:38.773 ' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:38.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.773 --rc genhtml_branch_coverage=1 00:15:38.773 --rc genhtml_function_coverage=1 00:15:38.773 --rc genhtml_legend=1 00:15:38.773 --rc geninfo_all_blocks=1 00:15:38.773 --rc geninfo_unexecuted_blocks=1 00:15:38.773 00:15:38.773 ' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:38.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:38.773 --rc genhtml_branch_coverage=1 00:15:38.773 --rc genhtml_function_coverage=1 00:15:38.773 --rc genhtml_legend=1 00:15:38.773 --rc geninfo_all_blocks=1 00:15:38.773 --rc geninfo_unexecuted_blocks=1 00:15:38.773 00:15:38.773 ' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:38.773 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:38.773 
08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:38.773 ************************************ 00:15:38.773 START TEST nvmf_identify 00:15:38.773 ************************************ 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:38.773 * Looking for test storage... 00:15:38.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:15:38.773 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.034 --rc genhtml_branch_coverage=1 00:15:39.034 --rc genhtml_function_coverage=1 00:15:39.034 --rc genhtml_legend=1 00:15:39.034 --rc geninfo_all_blocks=1 00:15:39.034 --rc geninfo_unexecuted_blocks=1 00:15:39.034 00:15:39.034 ' 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.034 --rc genhtml_branch_coverage=1 00:15:39.034 --rc genhtml_function_coverage=1 00:15:39.034 --rc genhtml_legend=1 00:15:39.034 --rc geninfo_all_blocks=1 00:15:39.034 --rc geninfo_unexecuted_blocks=1 00:15:39.034 00:15:39.034 ' 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.034 --rc genhtml_branch_coverage=1 00:15:39.034 --rc genhtml_function_coverage=1 00:15:39.034 --rc genhtml_legend=1 00:15:39.034 --rc geninfo_all_blocks=1 00:15:39.034 --rc geninfo_unexecuted_blocks=1 00:15:39.034 00:15:39.034 ' 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:39.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.034 --rc genhtml_branch_coverage=1 00:15:39.034 --rc genhtml_function_coverage=1 00:15:39.034 --rc genhtml_legend=1 00:15:39.034 --rc geninfo_all_blocks=1 00:15:39.034 --rc geninfo_unexecuted_blocks=1 00:15:39.034 00:15:39.034 ' 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.034 
08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.034 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.035 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.035 08:26:40 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:39.035 Cannot find device "nvmf_init_br" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:39.035 Cannot find device "nvmf_init_br2" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:39.035 Cannot find device "nvmf_tgt_br" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:39.035 Cannot find device "nvmf_tgt_br2" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:39.035 Cannot find device "nvmf_init_br" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:39.035 Cannot find device "nvmf_init_br2" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:39.035 Cannot find device "nvmf_tgt_br" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:39.035 Cannot find device "nvmf_tgt_br2" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:39.035 Cannot find device "nvmf_br" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:39.035 Cannot find device "nvmf_init_if" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:39.035 Cannot find device "nvmf_init_if2" 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.035 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.035 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:39.295 
08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:39.295 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:39.295 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:15:39.295 00:15:39.295 --- 10.0.0.3 ping statistics --- 00:15:39.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.295 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:39.295 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:39.295 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:15:39.295 00:15:39.295 --- 10.0.0.4 ping statistics --- 00:15:39.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.295 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:39.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:39.295 00:15:39.295 --- 10.0.0.1 ping statistics --- 00:15:39.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.295 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:39.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:39.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:15:39.295 00:15:39.295 --- 10.0.0.2 ping statistics --- 00:15:39.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.295 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # return 0 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74410 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74410 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 74410 ']' 00:15:39.295 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.295 08:26:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.295 [2024-10-15 08:26:41.022645] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:39.295 [2024-10-15 08:26:41.023073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.554 [2024-10-15 08:26:41.165972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.554 [2024-10-15 08:26:41.256687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.554 [2024-10-15 08:26:41.257133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.554 [2024-10-15 08:26:41.257376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.554 [2024-10-15 08:26:41.257522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.554 [2024-10-15 08:26:41.257704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:39.554 [2024-10-15 08:26:41.259281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.554 [2024-10-15 08:26:41.259424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.554 [2024-10-15 08:26:41.259574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.554 [2024-10-15 08:26:41.259575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.866 [2024-10-15 08:26:41.337934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.866 [2024-10-15 08:26:41.428714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.866 Malloc0 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:39.866 [2024-10-15 08:26:41.549058] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.866 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.154 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.154 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:40.154 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.154 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.154 [ 00:15:40.154 { 00:15:40.154 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.154 "subtype": "Discovery", 00:15:40.154 "listen_addresses": [ 00:15:40.154 { 00:15:40.154 "trtype": "TCP", 00:15:40.154 "adrfam": "IPv4", 00:15:40.154 "traddr": "10.0.0.3", 00:15:40.154 "trsvcid": "4420" 00:15:40.154 } 00:15:40.154 ], 00:15:40.154 "allow_any_host": true, 00:15:40.154 "hosts": [] 00:15:40.154 }, 00:15:40.154 { 00:15:40.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.154 "subtype": "NVMe", 00:15:40.154 "listen_addresses": [ 00:15:40.154 { 00:15:40.154 "trtype": "TCP", 00:15:40.154 "adrfam": "IPv4", 00:15:40.154 "traddr": "10.0.0.3", 00:15:40.154 "trsvcid": "4420" 00:15:40.154 } 00:15:40.154 ], 00:15:40.154 "allow_any_host": true, 00:15:40.154 "hosts": [], 00:15:40.154 "serial_number": "SPDK00000000000001", 00:15:40.154 "model_number": "SPDK bdev Controller", 00:15:40.154 "max_namespaces": 32, 00:15:40.154 "min_cntlid": 1, 00:15:40.154 "max_cntlid": 65519, 00:15:40.154 "namespaces": [ 00:15:40.154 { 00:15:40.154 "nsid": 1, 00:15:40.154 "bdev_name": "Malloc0", 00:15:40.154 "name": "Malloc0", 00:15:40.154 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:40.154 "eui64": "ABCDEF0123456789", 00:15:40.154 "uuid": "e248e407-3b90-414b-b95c-b030b72e26a5" 00:15:40.154 } 00:15:40.154 ] 00:15:40.154 } 00:15:40.154 ] 00:15:40.154 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.154 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:40.154 [2024-10-15 08:26:41.606928] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:15:40.154 [2024-10-15 08:26:41.606997] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74438 ] 00:15:40.154 [2024-10-15 08:26:41.749845] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:40.154 [2024-10-15 08:26:41.749956] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:40.154 [2024-10-15 08:26:41.749964] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:40.154 [2024-10-15 08:26:41.749980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:40.154 [2024-10-15 08:26:41.749992] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:40.154 [2024-10-15 08:26:41.750453] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:40.154 [2024-10-15 08:26:41.750532] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa7f750 0 00:15:40.154 [2024-10-15 08:26:41.758191] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:40.154 [2024-10-15 08:26:41.758223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:40.154 [2024-10-15 08:26:41.758230] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:40.154 [2024-10-15 08:26:41.758234] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:40.154 [2024-10-15 08:26:41.758280] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.154 [2024-10-15 08:26:41.758288] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.154 [2024-10-15 08:26:41.758293] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.154 [2024-10-15 08:26:41.758310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:40.154 [2024-10-15 08:26:41.758343] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.154 [2024-10-15 08:26:41.765172] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.154 [2024-10-15 08:26:41.765213] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.154 [2024-10-15 08:26:41.765234] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.154 [2024-10-15 08:26:41.765240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.154 [2024-10-15 08:26:41.765255] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:40.155 [2024-10-15 08:26:41.765265] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:40.155 [2024-10-15 08:26:41.765271] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:40.155 [2024-10-15 08:26:41.765290] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.155 
[2024-10-15 08:26:41.765300] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.155 [2024-10-15 08:26:41.765311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.155 [2024-10-15 08:26:41.765340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.155 [2024-10-15 08:26:41.765408] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.155 [2024-10-15 08:26:41.765416] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.155 [2024-10-15 08:26:41.765420] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765424] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.155 [2024-10-15 08:26:41.765431] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:40.155 [2024-10-15 08:26:41.765439] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:40.155 [2024-10-15 08:26:41.765447] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765451] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765455] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.155 [2024-10-15 08:26:41.765464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.155 [2024-10-15 08:26:41.765484] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.155 [2024-10-15 08:26:41.765535] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.155 [2024-10-15 08:26:41.765542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.155 [2024-10-15 08:26:41.765546] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765551] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.155 [2024-10-15 08:26:41.765557] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:40.155 [2024-10-15 08:26:41.765566] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:40.155 [2024-10-15 08:26:41.765574] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765579] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765582] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.155 [2024-10-15 08:26:41.765591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.155 [2024-10-15 08:26:41.765610] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.155 [2024-10-15 08:26:41.765665] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.155 [2024-10-15 08:26:41.765672] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:40.155 [2024-10-15 08:26:41.765676] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765680] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.155 [2024-10-15 08:26:41.765686] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:40.155 [2024-10-15 08:26:41.765697] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765702] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.155 [2024-10-15 08:26:41.765713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.155 [2024-10-15 08:26:41.765732] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.155 [2024-10-15 08:26:41.765776] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.155 [2024-10-15 08:26:41.765784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.155 [2024-10-15 08:26:41.765788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765792] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.155 [2024-10-15 08:26:41.765797] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:40.155 [2024-10-15 08:26:41.765803] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:40.155 [2024-10-15 08:26:41.765811] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:40.155 [2024-10-15 08:26:41.765917] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:40.155 [2024-10-15 08:26:41.765924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:40.155 [2024-10-15 08:26:41.765935] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765939] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.765943] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.155 [2024-10-15 08:26:41.765951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.155 [2024-10-15 08:26:41.765971] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.155 [2024-10-15 08:26:41.766028] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.155 [2024-10-15 08:26:41.766035] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.155 [2024-10-15 08:26:41.766039] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766054] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.155 [2024-10-15 08:26:41.766061] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:40.155 [2024-10-15 08:26:41.766072] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766077] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.155 [2024-10-15 08:26:41.766089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.155 [2024-10-15 08:26:41.766109] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.155 [2024-10-15 08:26:41.766176] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.155 [2024-10-15 08:26:41.766185] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.155 [2024-10-15 08:26:41.766189] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766193] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.155 [2024-10-15 08:26:41.766199] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:40.155 [2024-10-15 08:26:41.766204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:40.155 [2024-10-15 08:26:41.766213] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:40.155 [2024-10-15 08:26:41.766231] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:40.155 [2024-10-15 08:26:41.766243] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766248] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.155 [2024-10-15 08:26:41.766256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.155 [2024-10-15 08:26:41.766280] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.155 [2024-10-15 08:26:41.766370] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.155 [2024-10-15 08:26:41.766378] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.155 [2024-10-15 08:26:41.766382] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766386] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7f750): datao=0, datal=4096, cccid=0 00:15:40.155 [2024-10-15 08:26:41.766392] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae3840) on tqpair(0xa7f750): expected_datao=0, payload_size=4096 00:15:40.155 [2024-10-15 08:26:41.766397] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766406] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766411] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.155 [2024-10-15 08:26:41.766426] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.155 [2024-10-15 08:26:41.766430] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.155 [2024-10-15 08:26:41.766434] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.155 [2024-10-15 08:26:41.766444] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:40.155 [2024-10-15 08:26:41.766449] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:40.155 [2024-10-15 08:26:41.766454] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:40.155 [2024-10-15 08:26:41.766460] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:40.155 [2024-10-15 08:26:41.766465] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:40.155 [2024-10-15 08:26:41.766470] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:40.155 [2024-10-15 08:26:41.766480] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:40.155 [2024-10-15 08:26:41.766493] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766498] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766502] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.766511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.156 [2024-10-15 08:26:41.766532] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.156 [2024-10-15 08:26:41.766592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.156 [2024-10-15 08:26:41.766599] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.156 [2024-10-15 08:26:41.766603] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766608] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.156 [2024-10-15 08:26:41.766617] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766621] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766625] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.766632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.156 [2024-10-15 08:26:41.766639] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:15:40.156 [2024-10-15 08:26:41.766643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.766653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.156 [2024-10-15 08:26:41.766659] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766667] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.766673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.156 [2024-10-15 08:26:41.766680] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766684] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766687] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.766693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.156 [2024-10-15 08:26:41.766699] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:40.156 [2024-10-15 08:26:41.766713] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:40.156 [2024-10-15 08:26:41.766721] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766725] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.766732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.156 [2024-10-15 08:26:41.766754] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3840, cid 0, qid 0 00:15:40.156 [2024-10-15 08:26:41.766762] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae39c0, cid 1, qid 0 00:15:40.156 [2024-10-15 08:26:41.766767] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3b40, cid 2, qid 0 00:15:40.156 [2024-10-15 08:26:41.766772] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.156 [2024-10-15 08:26:41.766777] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3e40, cid 4, qid 0 00:15:40.156 [2024-10-15 08:26:41.766860] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.156 [2024-10-15 08:26:41.766867] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.156 [2024-10-15 08:26:41.766871] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3e40) on tqpair=0xa7f750 00:15:40.156 [2024-10-15 08:26:41.766882] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:40.156 [2024-10-15 08:26:41.766887] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:40.156 [2024-10-15 08:26:41.766899] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.766904] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.766912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.156 [2024-10-15 08:26:41.766931] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3e40, cid 4, qid 0 00:15:40.156 [2024-10-15 08:26:41.766990] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.156 [2024-10-15 08:26:41.766998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.156 [2024-10-15 08:26:41.767002] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767006] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7f750): datao=0, datal=4096, cccid=4 00:15:40.156 [2024-10-15 08:26:41.767010] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae3e40) on tqpair(0xa7f750): expected_datao=0, payload_size=4096 00:15:40.156 [2024-10-15 08:26:41.767015] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767023] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767027] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767035] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.156 [2024-10-15 08:26:41.767042] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.156 [2024-10-15 08:26:41.767046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3e40) on tqpair=0xa7f750 00:15:40.156 [2024-10-15 08:26:41.767064] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:40.156 [2024-10-15 08:26:41.767100] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767107] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.767127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.156 [2024-10-15 08:26:41.767138] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767146] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.767153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.156 [2024-10-15 08:26:41.767181] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3e40, cid 4, qid 0 00:15:40.156 [2024-10-15 08:26:41.767189] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3fc0, cid 5, qid 0 00:15:40.156 [2024-10-15 08:26:41.767287] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.156 [2024-10-15 08:26:41.767295] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.156 [2024-10-15 08:26:41.767299] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767303] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7f750): datao=0, datal=1024, cccid=4 00:15:40.156 [2024-10-15 08:26:41.767307] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae3e40) on tqpair(0xa7f750): expected_datao=0, payload_size=1024 00:15:40.156 [2024-10-15 08:26:41.767312] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767319] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767323] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767330] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.156 [2024-10-15 08:26:41.767336] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.156 [2024-10-15 08:26:41.767340] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767344] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3fc0) on tqpair=0xa7f750 00:15:40.156 [2024-10-15 08:26:41.767363] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.156 [2024-10-15 08:26:41.767371] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.156 [2024-10-15 08:26:41.767375] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767379] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3e40) on tqpair=0xa7f750 00:15:40.156 [2024-10-15 08:26:41.767392] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767397] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7f750) 00:15:40.156 [2024-10-15 08:26:41.767405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.156 [2024-10-15 08:26:41.767430] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3e40, cid 4, qid 0 00:15:40.156 [2024-10-15 08:26:41.767498] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.156 [2024-10-15 08:26:41.767505] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.156 [2024-10-15 08:26:41.767509] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767513] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7f750): datao=0, datal=3072, cccid=4 00:15:40.156 [2024-10-15 08:26:41.767518] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae3e40) on tqpair(0xa7f750): expected_datao=0, payload_size=3072 00:15:40.156 [2024-10-15 08:26:41.767522] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767530] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767534] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 
08:26:41.767542] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.156 [2024-10-15 08:26:41.767549] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.156 [2024-10-15 08:26:41.767553] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767557] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3e40) on tqpair=0xa7f750 00:15:40.156 [2024-10-15 08:26:41.767567] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.156 [2024-10-15 08:26:41.767572] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa7f750) 00:15:40.157 [2024-10-15 08:26:41.767579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.157 [2024-10-15 08:26:41.767603] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3e40, cid 4, qid 0 00:15:40.157 [2024-10-15 08:26:41.767667] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.157 [2024-10-15 08:26:41.767674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.157 [2024-10-15 08:26:41.767678] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.157 [2024-10-15 08:26:41.767682] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa7f750): datao=0, datal=8, cccid=4 00:15:40.157 ===================================================== 00:15:40.157 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:40.157 ===================================================== 00:15:40.157 Controller Capabilities/Features 00:15:40.157 ================================ 00:15:40.157 Vendor ID: 0000 00:15:40.157 Subsystem Vendor ID: 0000 00:15:40.157 Serial Number: .................... 00:15:40.157 Model Number: ........................................ 
00:15:40.157 Firmware Version: 25.01 00:15:40.157 Recommended Arb Burst: 0 00:15:40.157 IEEE OUI Identifier: 00 00 00 00:15:40.157 Multi-path I/O 00:15:40.157 May have multiple subsystem ports: No 00:15:40.157 May have multiple controllers: No 00:15:40.157 Associated with SR-IOV VF: No 00:15:40.157 Max Data Transfer Size: 131072 00:15:40.157 Max Number of Namespaces: 0 00:15:40.157 Max Number of I/O Queues: 1024 00:15:40.157 NVMe Specification Version (VS): 1.3 00:15:40.157 NVMe Specification Version (Identify): 1.3 00:15:40.157 Maximum Queue Entries: 128 00:15:40.157 Contiguous Queues Required: Yes 00:15:40.157 Arbitration Mechanisms Supported 00:15:40.157 Weighted Round Robin: Not Supported 00:15:40.157 Vendor Specific: Not Supported 00:15:40.157 Reset Timeout: 15000 ms 00:15:40.157 Doorbell Stride: 4 bytes 00:15:40.157 NVM Subsystem Reset: Not Supported 00:15:40.157 Command Sets Supported 00:15:40.157 NVM Command Set: Supported 00:15:40.157 Boot Partition: Not Supported 00:15:40.157 Memory Page Size Minimum: 4096 bytes 00:15:40.157 Memory Page Size Maximum: 4096 bytes 00:15:40.157 Persistent Memory Region: Not Supported 00:15:40.157 Optional Asynchronous Events Supported 00:15:40.157 Namespace Attribute Notices: Not Supported 00:15:40.157 Firmware Activation Notices: Not Supported 00:15:40.157 ANA Change Notices: Not Supported 00:15:40.157 PLE Aggregate Log Change Notices: Not Supported 00:15:40.157 LBA Status Info Alert Notices: Not Supported 00:15:40.157 EGE Aggregate Log Change Notices: Not Supported 00:15:40.157 Normal NVM Subsystem Shutdown event: Not Supported 00:15:40.157 Zone Descriptor Change Notices: Not Supported 00:15:40.157 Discovery Log Change Notices: Supported 00:15:40.157 Controller Attributes 00:15:40.157 128-bit Host Identifier: Not Supported 00:15:40.157 Non-Operational Permissive Mode: Not Supported 00:15:40.157 NVM Sets: Not Supported 00:15:40.157 Read Recovery Levels: Not Supported 00:15:40.157 Endurance Groups: Not Supported 00:15:40.157 Predictable Latency Mode: Not Supported 00:15:40.157 Traffic Based Keep ALive: Not Supported 00:15:40.157 Namespace Granularity: Not Supported 00:15:40.157 SQ Associations: Not Supported 00:15:40.157 UUID List: Not Supported 00:15:40.157 Multi-Domain Subsystem: Not Supported 00:15:40.157 Fixed Capacity Management: Not Supported 00:15:40.157 Variable Capacity Management: Not Supported 00:15:40.157 Delete Endurance Group: Not Supported 00:15:40.157 Delete NVM Set: Not Supported 00:15:40.157 Extended LBA Formats Supported: Not Supported 00:15:40.157 Flexible Data Placement Supported: Not Supported 00:15:40.157 00:15:40.157 Controller Memory Buffer Support 00:15:40.157 ================================ 00:15:40.157 Supported: No 00:15:40.157 00:15:40.157 Persistent Memory Region Support 00:15:40.157 ================================ 00:15:40.157 Supported: No 00:15:40.157 00:15:40.157 Admin Command Set Attributes 00:15:40.157 ============================ 00:15:40.157 Security Send/Receive: Not Supported 00:15:40.157 Format NVM: Not Supported 00:15:40.157 Firmware Activate/Download: Not Supported 00:15:40.157 Namespace Management: Not Supported 00:15:40.157 Device Self-Test: Not Supported 00:15:40.157 Directives: Not Supported 00:15:40.157 NVMe-MI: Not Supported 00:15:40.157 Virtualization Management: Not Supported 00:15:40.157 Doorbell Buffer Config: Not Supported 00:15:40.157 Get LBA Status Capability: Not Supported 00:15:40.157 Command & Feature Lockdown Capability: Not Supported 00:15:40.157 Abort Command Limit: 1 00:15:40.157 Async 
Event Request Limit: 4 00:15:40.157 Number of Firmware Slots: N/A 00:15:40.157 Firmware Slot 1 Read-Only: N/A 00:15:40.157 Firm[2024-10-15 08:26:41.767687] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xae3e40) on tqpair(0xa7f750): expected_datao=0, payload_size=8 00:15:40.157 [2024-10-15 08:26:41.767692] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.157 [2024-10-15 08:26:41.767699] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.157 [2024-10-15 08:26:41.767703] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.157 [2024-10-15 08:26:41.767719] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.157 [2024-10-15 08:26:41.767727] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.157 [2024-10-15 08:26:41.767731] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.157 [2024-10-15 08:26:41.767735] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3e40) on tqpair=0xa7f750 00:15:40.157 ware Activation Without Reset: N/A 00:15:40.157 Multiple Update Detection Support: N/A 00:15:40.157 Firmware Update Granularity: No Information Provided 00:15:40.157 Per-Namespace SMART Log: No 00:15:40.157 Asymmetric Namespace Access Log Page: Not Supported 00:15:40.157 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:40.157 Command Effects Log Page: Not Supported 00:15:40.157 Get Log Page Extended Data: Supported 00:15:40.157 Telemetry Log Pages: Not Supported 00:15:40.157 Persistent Event Log Pages: Not Supported 00:15:40.157 Supported Log Pages Log Page: May Support 00:15:40.157 Commands Supported & Effects Log Page: Not Supported 00:15:40.157 Feature Identifiers & Effects Log Page:May Support 00:15:40.157 NVMe-MI Commands & Effects Log Page: May Support 00:15:40.157 Data Area 4 for Telemetry Log: Not Supported 00:15:40.157 Error Log Page Entries Supported: 128 00:15:40.157 Keep Alive: Not Supported 00:15:40.157 00:15:40.157 NVM Command Set Attributes 00:15:40.157 ========================== 00:15:40.157 Submission Queue Entry Size 00:15:40.157 Max: 1 00:15:40.157 Min: 1 00:15:40.157 Completion Queue Entry Size 00:15:40.157 Max: 1 00:15:40.157 Min: 1 00:15:40.157 Number of Namespaces: 0 00:15:40.157 Compare Command: Not Supported 00:15:40.157 Write Uncorrectable Command: Not Supported 00:15:40.157 Dataset Management Command: Not Supported 00:15:40.157 Write Zeroes Command: Not Supported 00:15:40.157 Set Features Save Field: Not Supported 00:15:40.157 Reservations: Not Supported 00:15:40.157 Timestamp: Not Supported 00:15:40.157 Copy: Not Supported 00:15:40.157 Volatile Write Cache: Not Present 00:15:40.157 Atomic Write Unit (Normal): 1 00:15:40.157 Atomic Write Unit (PFail): 1 00:15:40.157 Atomic Compare & Write Unit: 1 00:15:40.157 Fused Compare & Write: Supported 00:15:40.157 Scatter-Gather List 00:15:40.157 SGL Command Set: Supported 00:15:40.157 SGL Keyed: Supported 00:15:40.157 SGL Bit Bucket Descriptor: Not Supported 00:15:40.157 SGL Metadata Pointer: Not Supported 00:15:40.157 Oversized SGL: Not Supported 00:15:40.157 SGL Metadata Address: Not Supported 00:15:40.157 SGL Offset: Supported 00:15:40.157 Transport SGL Data Block: Not Supported 00:15:40.157 Replay Protected Memory Block: Not Supported 00:15:40.157 00:15:40.157 Firmware Slot Information 00:15:40.157 ========================= 00:15:40.157 Active slot: 0 00:15:40.157 00:15:40.157 00:15:40.157 Error Log 00:15:40.157 ========= 00:15:40.157 00:15:40.157 Active 
Namespaces 00:15:40.157 ================= 00:15:40.157 Discovery Log Page 00:15:40.157 ================== 00:15:40.157 Generation Counter: 2 00:15:40.157 Number of Records: 2 00:15:40.157 Record Format: 0 00:15:40.157 00:15:40.157 Discovery Log Entry 0 00:15:40.157 ---------------------- 00:15:40.157 Transport Type: 3 (TCP) 00:15:40.157 Address Family: 1 (IPv4) 00:15:40.157 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:40.157 Entry Flags: 00:15:40.157 Duplicate Returned Information: 1 00:15:40.157 Explicit Persistent Connection Support for Discovery: 1 00:15:40.157 Transport Requirements: 00:15:40.157 Secure Channel: Not Required 00:15:40.157 Port ID: 0 (0x0000) 00:15:40.157 Controller ID: 65535 (0xffff) 00:15:40.157 Admin Max SQ Size: 128 00:15:40.158 Transport Service Identifier: 4420 00:15:40.158 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:40.158 Transport Address: 10.0.0.3 00:15:40.158 Discovery Log Entry 1 00:15:40.158 ---------------------- 00:15:40.158 Transport Type: 3 (TCP) 00:15:40.158 Address Family: 1 (IPv4) 00:15:40.158 Subsystem Type: 2 (NVM Subsystem) 00:15:40.158 Entry Flags: 00:15:40.158 Duplicate Returned Information: 0 00:15:40.158 Explicit Persistent Connection Support for Discovery: 0 00:15:40.158 Transport Requirements: 00:15:40.158 Secure Channel: Not Required 00:15:40.158 Port ID: 0 (0x0000) 00:15:40.158 Controller ID: 65535 (0xffff) 00:15:40.158 Admin Max SQ Size: 128 00:15:40.158 Transport Service Identifier: 4420 00:15:40.158 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:40.158 Transport Address: 10.0.0.3 [2024-10-15 08:26:41.767850] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:40.158 [2024-10-15 08:26:41.767865] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3840) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.767874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.158 [2024-10-15 08:26:41.767880] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae39c0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.767885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.158 [2024-10-15 08:26:41.767890] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3b40) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.767895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.158 [2024-10-15 08:26:41.767900] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.767905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.158 [2024-10-15 08:26:41.767915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.767920] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.767924] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.767932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.158 [2024-10-15 08:26:41.767957] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.158 [2024-10-15 08:26:41.768009] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.158 [2024-10-15 08:26:41.768016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.158 [2024-10-15 08:26:41.768020] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.768033] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768038] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768042] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.768049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.158 [2024-10-15 08:26:41.768072] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.158 [2024-10-15 08:26:41.768157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.158 [2024-10-15 08:26:41.768166] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.158 [2024-10-15 08:26:41.768170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768174] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.768180] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:40.158 [2024-10-15 08:26:41.768185] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:40.158 [2024-10-15 08:26:41.768196] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768201] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768205] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.768212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.158 [2024-10-15 08:26:41.768233] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.158 [2024-10-15 08:26:41.768280] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.158 [2024-10-15 08:26:41.768287] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.158 [2024-10-15 08:26:41.768291] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768295] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.768306] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768311] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768316] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.768323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.158 [2024-10-15 08:26:41.768342] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.158 [2024-10-15 08:26:41.768392] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.158 [2024-10-15 08:26:41.768400] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.158 [2024-10-15 08:26:41.768404] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768408] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.768418] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768423] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768427] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.768435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.158 [2024-10-15 08:26:41.768453] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.158 [2024-10-15 08:26:41.768500] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.158 [2024-10-15 08:26:41.768507] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.158 [2024-10-15 08:26:41.768511] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.768526] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768531] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768534] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.768542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.158 [2024-10-15 08:26:41.768560] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.158 [2024-10-15 08:26:41.768603] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.158 [2024-10-15 08:26:41.768611] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.158 [2024-10-15 08:26:41.768615] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768619] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.768630] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768635] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768639] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.768646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.158 [2024-10-15 08:26:41.768665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.158 [2024-10-15 
08:26:41.768714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.158 [2024-10-15 08:26:41.768721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.158 [2024-10-15 08:26:41.768725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.768740] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.768756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.158 [2024-10-15 08:26:41.768774] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.158 [2024-10-15 08:26:41.768821] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.158 [2024-10-15 08:26:41.768828] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.158 [2024-10-15 08:26:41.768832] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768836] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.158 [2024-10-15 08:26:41.768847] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768851] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.158 [2024-10-15 08:26:41.768855] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.158 [2024-10-15 08:26:41.768863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.159 [2024-10-15 08:26:41.768881] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.159 [2024-10-15 08:26:41.768930] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.159 [2024-10-15 08:26:41.768938] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.159 [2024-10-15 08:26:41.768941] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.768946] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.159 [2024-10-15 08:26:41.768956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.768961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.768965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.159 [2024-10-15 08:26:41.768972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.159 [2024-10-15 08:26:41.768990] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.159 [2024-10-15 08:26:41.769043] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.159 [2024-10-15 08:26:41.769050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.159 [2024-10-15 
08:26:41.769054] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.769058] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.159 [2024-10-15 08:26:41.769069] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.769074] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.769078] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.159 [2024-10-15 08:26:41.769085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.159 [2024-10-15 08:26:41.769103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.159 [2024-10-15 08:26:41.773135] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.159 [2024-10-15 08:26:41.773158] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.159 [2024-10-15 08:26:41.773173] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.773178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.159 [2024-10-15 08:26:41.773193] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.773199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.773203] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa7f750) 00:15:40.159 [2024-10-15 08:26:41.773212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.159 [2024-10-15 08:26:41.773239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xae3cc0, cid 3, qid 0 00:15:40.159 [2024-10-15 08:26:41.773287] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.159 [2024-10-15 08:26:41.773295] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.159 [2024-10-15 08:26:41.773299] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.159 [2024-10-15 08:26:41.773303] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xae3cc0) on tqpair=0xa7f750 00:15:40.159 [2024-10-15 08:26:41.773312] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:15:40.159 00:15:40.159 08:26:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:40.159 [2024-10-15 08:26:41.818602] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:15:40.159 [2024-10-15 08:26:41.818656] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74440 ] 00:15:40.422 [2024-10-15 08:26:41.967431] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:40.422 [2024-10-15 08:26:41.967554] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:40.422 [2024-10-15 08:26:41.967567] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:40.422 [2024-10-15 08:26:41.967590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:40.422 [2024-10-15 08:26:41.967608] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:40.422 [2024-10-15 08:26:41.968103] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:40.422 [2024-10-15 08:26:41.968223] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x95a750 0 00:15:40.422 [2024-10-15 08:26:41.983225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:40.422 [2024-10-15 08:26:41.983284] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:40.422 [2024-10-15 08:26:41.983298] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:40.422 [2024-10-15 08:26:41.983306] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:40.422 [2024-10-15 08:26:41.983392] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.422 [2024-10-15 08:26:41.983407] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.422 [2024-10-15 08:26:41.983417] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.422 [2024-10-15 08:26:41.983447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:40.422 [2024-10-15 08:26:41.983512] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.422 [2024-10-15 08:26:41.991164] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.422 [2024-10-15 08:26:41.991216] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.422 [2024-10-15 08:26:41.991228] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.422 [2024-10-15 08:26:41.991241] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.422 [2024-10-15 08:26:41.991272] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:40.422 [2024-10-15 08:26:41.991291] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:40.422 [2024-10-15 08:26:41.991307] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:40.422 [2024-10-15 08:26:41.991343] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.422 [2024-10-15 08:26:41.991355] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.422 [2024-10-15 08:26:41.991364] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.422 [2024-10-15 08:26:41.991387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.422 [2024-10-15 08:26:41.991449] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.422 [2024-10-15 08:26:41.991511] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.422 [2024-10-15 08:26:41.991538] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.422 [2024-10-15 08:26:41.991553] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.422 [2024-10-15 08:26:41.991568] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.422 [2024-10-15 08:26:41.991587] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:40.422 [2024-10-15 08:26:41.991616] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:40.422 [2024-10-15 08:26:41.991646] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.422 [2024-10-15 08:26:41.991663] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.422 [2024-10-15 08:26:41.991677] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.423 [2024-10-15 08:26:41.991696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.423 [2024-10-15 08:26:41.991758] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.423 [2024-10-15 08:26:41.991807] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.423 [2024-10-15 08:26:41.991831] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.423 [2024-10-15 08:26:41.991846] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.991862] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.423 [2024-10-15 08:26:41.991883] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:40.423 [2024-10-15 08:26:41.991912] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:40.423 [2024-10-15 08:26:41.991933] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.991949] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.991963] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.423 [2024-10-15 08:26:41.991991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.423 [2024-10-15 08:26:41.992057] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.423 [2024-10-15 08:26:41.992101] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.423 [2024-10-15 08:26:41.992201] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.423 [2024-10-15 08:26:41.992217] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.992232] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.423 [2024-10-15 08:26:41.992253] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:40.423 [2024-10-15 08:26:41.992291] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.992308] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.992322] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.423 [2024-10-15 08:26:41.992351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.423 [2024-10-15 08:26:41.992417] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.423 [2024-10-15 08:26:41.992469] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.423 [2024-10-15 08:26:41.992494] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.423 [2024-10-15 08:26:41.992509] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.992525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.423 [2024-10-15 08:26:41.992543] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:40.423 [2024-10-15 08:26:41.992561] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:40.423 [2024-10-15 08:26:41.992586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:40.423 [2024-10-15 08:26:41.992703] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:40.423 [2024-10-15 08:26:41.992717] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:40.423 [2024-10-15 08:26:41.992762] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.992774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.992789] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.423 [2024-10-15 08:26:41.992817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.423 [2024-10-15 08:26:41.992878] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.423 [2024-10-15 08:26:41.992939] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.423 [2024-10-15 08:26:41.992966] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.423 [2024-10-15 08:26:41.992980] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.992995] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.423 [2024-10-15 08:26:41.993016] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:40.423 [2024-10-15 08:26:41.993055] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.993072] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.993081] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.423 [2024-10-15 08:26:41.993100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.423 [2024-10-15 08:26:41.993197] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.423 [2024-10-15 08:26:41.993248] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.423 [2024-10-15 08:26:41.993271] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.423 [2024-10-15 08:26:41.993281] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.993290] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.423 [2024-10-15 08:26:41.993303] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:40.423 [2024-10-15 08:26:41.993322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:40.423 [2024-10-15 08:26:41.993352] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:40.423 [2024-10-15 08:26:41.993407] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:40.423 [2024-10-15 08:26:41.993440] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.993452] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.423 [2024-10-15 08:26:41.993471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.423 [2024-10-15 08:26:41.993536] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.423 [2024-10-15 08:26:41.993640] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.423 [2024-10-15 08:26:41.993666] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.423 [2024-10-15 08:26:41.993681] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.993696] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95a750): datao=0, datal=4096, cccid=0 00:15:40.423 [2024-10-15 08:26:41.993715] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9be840) on tqpair(0x95a750): expected_datao=0, payload_size=4096 00:15:40.423 [2024-10-15 08:26:41.993732] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.993762] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.993778] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 
08:26:41.993805] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.423 [2024-10-15 08:26:41.993821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.423 [2024-10-15 08:26:41.993834] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.993850] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.423 [2024-10-15 08:26:41.993879] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:40.423 [2024-10-15 08:26:41.993899] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:40.423 [2024-10-15 08:26:41.993915] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:40.423 [2024-10-15 08:26:41.993929] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:40.423 [2024-10-15 08:26:41.993948] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:40.423 [2024-10-15 08:26:41.993967] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:40.423 [2024-10-15 08:26:41.993995] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:40.423 [2024-10-15 08:26:41.994035] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.994077] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.994091] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.423 [2024-10-15 08:26:41.994157] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.423 [2024-10-15 08:26:41.994228] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.423 [2024-10-15 08:26:41.994282] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.423 [2024-10-15 08:26:41.994309] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.423 [2024-10-15 08:26:41.994324] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.423 [2024-10-15 08:26:41.994339] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.423 [2024-10-15 08:26:41.994361] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994371] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994382] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x95a750) 00:15:40.424 [2024-10-15 08:26:41.994407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.424 [2024-10-15 08:26:41.994432] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994448] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994463] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x95a750) 00:15:40.424 
[2024-10-15 08:26:41.994486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.424 [2024-10-15 08:26:41.994505] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994515] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994523] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x95a750) 00:15:40.424 [2024-10-15 08:26:41.994538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.424 [2024-10-15 08:26:41.994555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994584] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.424 [2024-10-15 08:26:41.994608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.424 [2024-10-15 08:26:41.994628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.994665] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.994690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994702] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95a750) 00:15:40.424 [2024-10-15 08:26:41.994721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.424 [2024-10-15 08:26:41.994793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be840, cid 0, qid 0 00:15:40.424 [2024-10-15 08:26:41.994815] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9be9c0, cid 1, qid 0 00:15:40.424 [2024-10-15 08:26:41.994829] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9beb40, cid 2, qid 0 00:15:40.424 [2024-10-15 08:26:41.994847] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.424 [2024-10-15 08:26:41.994859] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bee40, cid 4, qid 0 00:15:40.424 [2024-10-15 08:26:41.994907] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.424 [2024-10-15 08:26:41.994933] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.424 [2024-10-15 08:26:41.994948] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.994963] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bee40) on tqpair=0x95a750 00:15:40.424 [2024-10-15 08:26:41.994976] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:40.424 [2024-10-15 08:26:41.994995] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.995038] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.995065] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.995089] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.995103] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.995112] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95a750) 00:15:40.424 [2024-10-15 08:26:41.999181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.424 [2024-10-15 08:26:41.999220] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bee40, cid 4, qid 0 00:15:40.424 [2024-10-15 08:26:41.999288] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.424 [2024-10-15 08:26:41.999296] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.424 [2024-10-15 08:26:41.999300] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bee40) on tqpair=0x95a750 00:15:40.424 [2024-10-15 08:26:41.999385] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.999398] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.999413] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999421] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95a750) 00:15:40.424 [2024-10-15 08:26:41.999432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.424 [2024-10-15 08:26:41.999463] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bee40, cid 4, qid 0 00:15:40.424 [2024-10-15 08:26:41.999531] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.424 [2024-10-15 08:26:41.999545] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.424 [2024-10-15 08:26:41.999552] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999559] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95a750): datao=0, datal=4096, cccid=4 00:15:40.424 [2024-10-15 08:26:41.999565] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bee40) on tqpair(0x95a750): expected_datao=0, payload_size=4096 00:15:40.424 [2024-10-15 08:26:41.999571] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999580] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999585] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999595] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.424 [2024-10-15 08:26:41.999601] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:15:40.424 [2024-10-15 08:26:41.999604] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999609] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bee40) on tqpair=0x95a750 00:15:40.424 [2024-10-15 08:26:41.999641] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:40.424 [2024-10-15 08:26:41.999660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.999676] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.999691] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999699] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95a750) 00:15:40.424 [2024-10-15 08:26:41.999707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.424 [2024-10-15 08:26:41.999742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bee40, cid 4, qid 0 00:15:40.424 [2024-10-15 08:26:41.999823] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.424 [2024-10-15 08:26:41.999837] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.424 [2024-10-15 08:26:41.999843] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999847] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95a750): datao=0, datal=4096, cccid=4 00:15:40.424 [2024-10-15 08:26:41.999852] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bee40) on tqpair(0x95a750): expected_datao=0, payload_size=4096 00:15:40.424 [2024-10-15 08:26:41.999857] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999865] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999869] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999878] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.424 [2024-10-15 08:26:41.999884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.424 [2024-10-15 08:26:41.999890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999897] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bee40) on tqpair=0x95a750 00:15:40.424 [2024-10-15 08:26:41.999923] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.999939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:40.424 [2024-10-15 08:26:41.999952] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:41.999960] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95a750) 00:15:40.424 [2024-10-15 08:26:41.999971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.424 [2024-10-15 08:26:41.999995] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bee40, cid 4, qid 0 00:15:40.424 [2024-10-15 08:26:42.000063] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.424 [2024-10-15 08:26:42.000076] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.424 [2024-10-15 08:26:42.000083] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:42.000089] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95a750): datao=0, datal=4096, cccid=4 00:15:40.424 [2024-10-15 08:26:42.000094] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bee40) on tqpair(0x95a750): expected_datao=0, payload_size=4096 00:15:40.424 [2024-10-15 08:26:42.000099] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:42.000106] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:42.000110] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:42.000139] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.424 [2024-10-15 08:26:42.000151] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.424 [2024-10-15 08:26:42.000158] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.424 [2024-10-15 08:26:42.000164] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bee40) on tqpair=0x95a750 00:15:40.425 [2024-10-15 08:26:42.000181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:40.425 [2024-10-15 08:26:42.000192] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:40.425 [2024-10-15 08:26:42.000205] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:40.425 [2024-10-15 08:26:42.000217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:40.425 [2024-10-15 08:26:42.000225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:40.425 [2024-10-15 08:26:42.000233] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:40.425 [2024-10-15 08:26:42.000239] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:40.425 [2024-10-15 08:26:42.000245] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:40.425 [2024-10-15 08:26:42.000257] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:40.425 [2024-10-15 08:26:42.000284] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000291] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.000304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.425 [2024-10-15 08:26:42.000317] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000325] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000330] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.000337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:40.425 [2024-10-15 08:26:42.000381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bee40, cid 4, qid 0 00:15:40.425 [2024-10-15 08:26:42.000390] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9befc0, cid 5, qid 0 00:15:40.425 [2024-10-15 08:26:42.000466] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.425 [2024-10-15 08:26:42.000478] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.425 [2024-10-15 08:26:42.000485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000492] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bee40) on tqpair=0x95a750 00:15:40.425 [2024-10-15 08:26:42.000502] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.425 [2024-10-15 08:26:42.000508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.425 [2024-10-15 08:26:42.000512] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000516] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9befc0) on tqpair=0x95a750 00:15:40.425 [2024-10-15 08:26:42.000529] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000537] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.000548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.425 [2024-10-15 08:26:42.000571] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9befc0, cid 5, qid 0 00:15:40.425 [2024-10-15 08:26:42.000621] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.425 [2024-10-15 08:26:42.000633] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.425 [2024-10-15 08:26:42.000639] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000646] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9befc0) on tqpair=0x95a750 00:15:40.425 [2024-10-15 08:26:42.000661] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.000673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.425 [2024-10-15 08:26:42.000699] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9befc0, cid 5, qid 0 00:15:40.425 [2024-10-15 08:26:42.000751] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.425 [2024-10-15 08:26:42.000763] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:40.425 [2024-10-15 08:26:42.000769] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000777] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9befc0) on tqpair=0x95a750 00:15:40.425 [2024-10-15 08:26:42.000791] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000797] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.000804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.425 [2024-10-15 08:26:42.000830] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9befc0, cid 5, qid 0 00:15:40.425 [2024-10-15 08:26:42.000882] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.425 [2024-10-15 08:26:42.000893] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.425 [2024-10-15 08:26:42.000900] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000906] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9befc0) on tqpair=0x95a750 00:15:40.425 [2024-10-15 08:26:42.000932] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000938] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.000947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.425 [2024-10-15 08:26:42.000960] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.000967] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.000978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.425 [2024-10-15 08:26:42.000992] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001000] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.001007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.425 [2024-10-15 08:26:42.001017] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001022] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x95a750) 00:15:40.425 [2024-10-15 08:26:42.001032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.425 [2024-10-15 08:26:42.001066] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9befc0, cid 5, qid 0 00:15:40.425 [2024-10-15 08:26:42.001078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bee40, cid 4, qid 0 00:15:40.425 [2024-10-15 08:26:42.001084] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bf140, cid 6, qid 0 00:15:40.425 [2024-10-15 
08:26:42.001089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bf2c0, cid 7, qid 0 00:15:40.425 [2024-10-15 08:26:42.001248] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.425 [2024-10-15 08:26:42.001258] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.425 [2024-10-15 08:26:42.001262] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001266] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95a750): datao=0, datal=8192, cccid=5 00:15:40.425 [2024-10-15 08:26:42.001273] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9befc0) on tqpair(0x95a750): expected_datao=0, payload_size=8192 00:15:40.425 [2024-10-15 08:26:42.001281] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001303] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001309] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001315] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.425 [2024-10-15 08:26:42.001324] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.425 [2024-10-15 08:26:42.001330] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001334] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95a750): datao=0, datal=512, cccid=4 00:15:40.425 [2024-10-15 08:26:42.001340] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bee40) on tqpair(0x95a750): expected_datao=0, payload_size=512 00:15:40.425 [2024-10-15 08:26:42.001347] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001358] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001365] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001374] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.425 [2024-10-15 08:26:42.001383] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.425 [2024-10-15 08:26:42.001386] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001390] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95a750): datao=0, datal=512, cccid=6 00:15:40.425 [2024-10-15 08:26:42.001395] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bf140) on tqpair(0x95a750): expected_datao=0, payload_size=512 00:15:40.425 [2024-10-15 08:26:42.001400] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001407] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001413] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.425 [2024-10-15 08:26:42.001422] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:40.425 [2024-10-15 08:26:42.001430] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:40.425 [2024-10-15 08:26:42.001434] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:40.426 [2024-10-15 08:26:42.001437] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x95a750): datao=0, datal=4096, cccid=7 00:15:40.426 [2024-10-15 08:26:42.001442] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9bf2c0) on tqpair(0x95a750): expected_datao=0, payload_size=4096 00:15:40.426 [2024-10-15 08:26:42.001447] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.426 [2024-10-15 08:26:42.001454] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:40.426 [2024-10-15 08:26:42.001458] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:40.426 [2024-10-15 08:26:42.001463] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.426 [2024-10-15 08:26:42.001472] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.426 [2024-10-15 08:26:42.001478] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.426 [2024-10-15 08:26:42.001485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9befc0) on tqpair=0x95a750 00:15:40.426 ===================================================== 00:15:40.426 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:40.426 ===================================================== 00:15:40.426 Controller Capabilities/Features 00:15:40.426 ================================ 00:15:40.426 Vendor ID: 8086 00:15:40.426 Subsystem Vendor ID: 8086 00:15:40.426 Serial Number: SPDK00000000000001 00:15:40.426 Model Number: SPDK bdev Controller 00:15:40.426 Firmware Version: 25.01 00:15:40.426 Recommended Arb Burst: 6 00:15:40.426 IEEE OUI Identifier: e4 d2 5c 00:15:40.426 Multi-path I/O 00:15:40.426 May have multiple subsystem ports: Yes 00:15:40.426 May have multiple controllers: Yes 00:15:40.426 Associated with SR-IOV VF: No 00:15:40.426 Max Data Transfer Size: 131072 00:15:40.426 Max Number of Namespaces: 32 00:15:40.426 Max Number of I/O Queues: 127 00:15:40.426 NVMe Specification Version (VS): 1.3 00:15:40.426 NVMe Specification Version (Identify): 1.3 00:15:40.426 Maximum Queue Entries: 128 00:15:40.426 Contiguous Queues Required: Yes 00:15:40.426 Arbitration Mechanisms Supported 00:15:40.426 Weighted Round Robin: Not Supported 00:15:40.426 Vendor Specific: Not Supported 00:15:40.426 Reset Timeout: 15000 ms 00:15:40.426 Doorbell Stride: 4 bytes 00:15:40.426 NVM Subsystem Reset: Not Supported 00:15:40.426 Command Sets Supported 00:15:40.426 NVM Command Set: Supported 00:15:40.426 Boot Partition: Not Supported 00:15:40.426 Memory Page Size Minimum: 4096 bytes 00:15:40.426 Memory Page Size Maximum: 4096 bytes 00:15:40.426 Persistent Memory Region: Not Supported 00:15:40.426 Optional Asynchronous Events Supported 00:15:40.426 Namespace Attribute Notices: Supported 00:15:40.426 Firmware Activation Notices: Not Supported 00:15:40.426 ANA Change Notices: Not Supported 00:15:40.426 PLE Aggregate Log Change Notices: Not Supported 00:15:40.426 LBA Status Info Alert Notices: Not Supported 00:15:40.426 EGE Aggregate Log Change Notices: Not Supported 00:15:40.426 Normal NVM Subsystem Shutdown event: Not Supported 00:15:40.426 Zone Descriptor Change Notices: Not Supported 00:15:40.426 Discovery Log Change Notices: Not Supported 00:15:40.426 Controller Attributes 00:15:40.426 128-bit Host Identifier: Supported 00:15:40.426 Non-Operational Permissive Mode: Not Supported 00:15:40.426 NVM Sets: Not Supported 00:15:40.426 Read Recovery Levels: Not Supported 00:15:40.426 Endurance Groups: Not Supported 00:15:40.426 Predictable Latency Mode: Not Supported 00:15:40.426 Traffic Based Keep ALive: Not Supported 00:15:40.426 Namespace Granularity: Not Supported 00:15:40.426 SQ Associations: Not 
Supported 00:15:40.426 UUID List: Not Supported 00:15:40.426 Multi-Domain Subsystem: Not Supported 00:15:40.426 Fixed Capacity Management: Not Supported 00:15:40.426 Variable Capacity Management: Not Supported 00:15:40.426 Delete Endurance Group: Not Supported 00:15:40.426 Delete NVM Set: Not Supported 00:15:40.426 Extended LBA Formats Supported: Not Supported 00:15:40.426 Flexible Data Placement Supported: Not Supported 00:15:40.426 00:15:40.426 Controller Memory Buffer Support 00:15:40.426 ================================ 00:15:40.426 Supported: No 00:15:40.426 00:15:40.426 Persistent Memory Region Support 00:15:40.426 ================================ 00:15:40.426 Supported: No 00:15:40.426 00:15:40.426 Admin Command Set Attributes 00:15:40.426 ============================ 00:15:40.426 Security Send/Receive: Not Supported 00:15:40.426 Format NVM: Not Supported 00:15:40.426 Firmware Activate/Download: Not Supported 00:15:40.426 Namespace Management: Not Supported 00:15:40.426 Device Self-Test: Not Supported 00:15:40.426 Directives: Not Supported 00:15:40.426 NVMe-MI: Not Supported 00:15:40.426 Virtualization Management: Not Supported 00:15:40.426 Doorbell Buffer Config: Not Supported 00:15:40.426 Get LBA Status Capability: Not Supported 00:15:40.426 Command & Feature Lockdown Capability: Not Supported 00:15:40.426 Abort Command Limit: 4 00:15:40.426 Async Event Request Limit: 4 00:15:40.426 Number of Firmware Slots: N/A 00:15:40.426 Firmware Slot 1 Read-Only: N/A 00:15:40.426 Firmware Activation Without Reset: [2024-10-15 08:26:42.001510] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.426 [2024-10-15 08:26:42.001519] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.426 [2024-10-15 08:26:42.001522] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.426 [2024-10-15 08:26:42.001527] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bee40) on tqpair=0x95a750 00:15:40.426 [2024-10-15 08:26:42.001546] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.426 [2024-10-15 08:26:42.001556] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.426 [2024-10-15 08:26:42.001560] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.426 [2024-10-15 08:26:42.001564] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bf140) on tqpair=0x95a750 00:15:40.426 [2024-10-15 08:26:42.001572] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.426 [2024-10-15 08:26:42.001578] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.426 [2024-10-15 08:26:42.001581] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.426 [2024-10-15 08:26:42.001585] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bf2c0) on tqpair=0x95a750 00:15:40.426 N/A 00:15:40.426 Multiple Update Detection Support: N/A 00:15:40.426 Firmware Update Granularity: No Information Provided 00:15:40.426 Per-Namespace SMART Log: No 00:15:40.426 Asymmetric Namespace Access Log Page: Not Supported 00:15:40.426 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:40.426 Command Effects Log Page: Supported 00:15:40.426 Get Log Page Extended Data: Supported 00:15:40.426 Telemetry Log Pages: Not Supported 00:15:40.426 Persistent Event Log Pages: Not Supported 00:15:40.426 Supported Log Pages Log Page: May Support 00:15:40.426 Commands Supported & Effects Log Page: Not Supported 00:15:40.426 Feature Identifiers & 
Effects Log Page:May Support 00:15:40.426 NVMe-MI Commands & Effects Log Page: May Support 00:15:40.426 Data Area 4 for Telemetry Log: Not Supported 00:15:40.426 Error Log Page Entries Supported: 128 00:15:40.426 Keep Alive: Supported 00:15:40.426 Keep Alive Granularity: 10000 ms 00:15:40.426 00:15:40.426 NVM Command Set Attributes 00:15:40.426 ========================== 00:15:40.426 Submission Queue Entry Size 00:15:40.426 Max: 64 00:15:40.426 Min: 64 00:15:40.426 Completion Queue Entry Size 00:15:40.426 Max: 16 00:15:40.426 Min: 16 00:15:40.426 Number of Namespaces: 32 00:15:40.426 Compare Command: Supported 00:15:40.426 Write Uncorrectable Command: Not Supported 00:15:40.426 Dataset Management Command: Supported 00:15:40.426 Write Zeroes Command: Supported 00:15:40.426 Set Features Save Field: Not Supported 00:15:40.426 Reservations: Supported 00:15:40.426 Timestamp: Not Supported 00:15:40.426 Copy: Supported 00:15:40.426 Volatile Write Cache: Present 00:15:40.426 Atomic Write Unit (Normal): 1 00:15:40.426 Atomic Write Unit (PFail): 1 00:15:40.426 Atomic Compare & Write Unit: 1 00:15:40.426 Fused Compare & Write: Supported 00:15:40.426 Scatter-Gather List 00:15:40.426 SGL Command Set: Supported 00:15:40.426 SGL Keyed: Supported 00:15:40.426 SGL Bit Bucket Descriptor: Not Supported 00:15:40.426 SGL Metadata Pointer: Not Supported 00:15:40.426 Oversized SGL: Not Supported 00:15:40.426 SGL Metadata Address: Not Supported 00:15:40.426 SGL Offset: Supported 00:15:40.426 Transport SGL Data Block: Not Supported 00:15:40.426 Replay Protected Memory Block: Not Supported 00:15:40.426 00:15:40.426 Firmware Slot Information 00:15:40.426 ========================= 00:15:40.426 Active slot: 1 00:15:40.426 Slot 1 Firmware Revision: 25.01 00:15:40.426 00:15:40.426 00:15:40.426 Commands Supported and Effects 00:15:40.426 ============================== 00:15:40.426 Admin Commands 00:15:40.426 -------------- 00:15:40.426 Get Log Page (02h): Supported 00:15:40.426 Identify (06h): Supported 00:15:40.426 Abort (08h): Supported 00:15:40.427 Set Features (09h): Supported 00:15:40.427 Get Features (0Ah): Supported 00:15:40.427 Asynchronous Event Request (0Ch): Supported 00:15:40.427 Keep Alive (18h): Supported 00:15:40.427 I/O Commands 00:15:40.427 ------------ 00:15:40.427 Flush (00h): Supported LBA-Change 00:15:40.427 Write (01h): Supported LBA-Change 00:15:40.427 Read (02h): Supported 00:15:40.427 Compare (05h): Supported 00:15:40.427 Write Zeroes (08h): Supported LBA-Change 00:15:40.427 Dataset Management (09h): Supported LBA-Change 00:15:40.427 Copy (19h): Supported LBA-Change 00:15:40.427 00:15:40.427 Error Log 00:15:40.427 ========= 00:15:40.427 00:15:40.427 Arbitration 00:15:40.427 =========== 00:15:40.427 Arbitration Burst: 1 00:15:40.427 00:15:40.427 Power Management 00:15:40.427 ================ 00:15:40.427 Number of Power States: 1 00:15:40.427 Current Power State: Power State #0 00:15:40.427 Power State #0: 00:15:40.427 Max Power: 0.00 W 00:15:40.427 Non-Operational State: Operational 00:15:40.427 Entry Latency: Not Reported 00:15:40.427 Exit Latency: Not Reported 00:15:40.427 Relative Read Throughput: 0 00:15:40.427 Relative Read Latency: 0 00:15:40.427 Relative Write Throughput: 0 00:15:40.427 Relative Write Latency: 0 00:15:40.427 Idle Power: Not Reported 00:15:40.427 Active Power: Not Reported 00:15:40.427 Non-Operational Permissive Mode: Not Supported 00:15:40.427 00:15:40.427 Health Information 00:15:40.427 ================== 00:15:40.427 Critical Warnings: 00:15:40.427 Available Spare Space: 
OK 00:15:40.427 Temperature: OK 00:15:40.427 Device Reliability: OK 00:15:40.427 Read Only: No 00:15:40.427 Volatile Memory Backup: OK 00:15:40.427 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:40.427 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:40.427 Available Spare: 0% 00:15:40.427 Available Spare Threshold: 0% 00:15:40.427 Life Percentage Used:[2024-10-15 08:26:42.001729] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.001738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x95a750) 00:15:40.427 [2024-10-15 08:26:42.001747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.427 [2024-10-15 08:26:42.001779] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9bf2c0, cid 7, qid 0 00:15:40.427 [2024-10-15 08:26:42.001826] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.427 [2024-10-15 08:26:42.001838] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.427 [2024-10-15 08:26:42.001842] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.001846] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9bf2c0) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.001928] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:40.427 [2024-10-15 08:26:42.001955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be840) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.001969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.427 [2024-10-15 08:26:42.001979] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9be9c0) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.001985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.427 [2024-10-15 08:26:42.001990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9beb40) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.001995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.427 [2024-10-15 08:26:42.002001] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.002006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:40.427 [2024-10-15 08:26:42.002017] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002022] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.427 [2024-10-15 08:26:42.002035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.427 [2024-10-15 08:26:42.002078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.427 [2024-10-15 08:26:42.002172] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.427 [2024-10-15 08:26:42.002183] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.427 [2024-10-15 08:26:42.002187] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002191] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.002201] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002205] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002209] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.427 [2024-10-15 08:26:42.002218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.427 [2024-10-15 08:26:42.002244] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.427 [2024-10-15 08:26:42.002316] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.427 [2024-10-15 08:26:42.002323] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.427 [2024-10-15 08:26:42.002327] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002331] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.002337] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:40.427 [2024-10-15 08:26:42.002342] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:40.427 [2024-10-15 08:26:42.002353] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002358] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.427 [2024-10-15 08:26:42.002370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.427 [2024-10-15 08:26:42.002388] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.427 [2024-10-15 08:26:42.002438] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.427 [2024-10-15 08:26:42.002445] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.427 [2024-10-15 08:26:42.002449] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002453] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.002474] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002479] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002483] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.427 [2024-10-15 08:26:42.002490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.427 [2024-10-15 08:26:42.002507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.427 [2024-10-15 08:26:42.002555] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.427 [2024-10-15 08:26:42.002562] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.427 [2024-10-15 08:26:42.002566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002570] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.002581] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002589] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.427 [2024-10-15 08:26:42.002596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.427 [2024-10-15 08:26:42.002613] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.427 [2024-10-15 08:26:42.002655] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.427 [2024-10-15 08:26:42.002662] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.427 [2024-10-15 08:26:42.002666] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002670] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.427 [2024-10-15 08:26:42.002680] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002685] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.427 [2024-10-15 08:26:42.002689] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.427 [2024-10-15 08:26:42.002696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.427 [2024-10-15 08:26:42.002712] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.428 [2024-10-15 08:26:42.002763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.428 [2024-10-15 08:26:42.002770] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.428 [2024-10-15 08:26:42.002774] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.002778] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.428 [2024-10-15 08:26:42.002788] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.002793] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.002797] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.428 [2024-10-15 08:26:42.002804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.428 [2024-10-15 08:26:42.002821] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.428 [2024-10-15 08:26:42.002869] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.428 [2024-10-15 08:26:42.002885] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.428 [2024-10-15 08:26:42.002890] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.002894] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.428 [2024-10-15 08:26:42.002906] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.002910] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.002914] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.428 [2024-10-15 08:26:42.002922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.428 [2024-10-15 08:26:42.002940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.428 [2024-10-15 08:26:42.002991] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.428 [2024-10-15 08:26:42.002998] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.428 [2024-10-15 08:26:42.003001] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.003006] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.428 [2024-10-15 08:26:42.003016] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.003021] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.003025] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.428 [2024-10-15 08:26:42.003032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.428 [2024-10-15 08:26:42.003049] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.428 [2024-10-15 08:26:42.003099] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.428 [2024-10-15 08:26:42.003106] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.428 [2024-10-15 08:26:42.003110] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.007133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.428 [2024-10-15 08:26:42.007154] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.007160] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.007164] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x95a750) 00:15:40.428 [2024-10-15 08:26:42.007173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:40.428 [2024-10-15 08:26:42.007198] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9becc0, cid 3, qid 0 00:15:40.428 [2024-10-15 08:26:42.007248] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:40.428 [2024-10-15 08:26:42.007255] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:40.428 [2024-10-15 08:26:42.007259] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:40.428 [2024-10-15 08:26:42.007263] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9becc0) on tqpair=0x95a750 00:15:40.428 
[2024-10-15 08:26:42.007272] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:15:40.428 0% 00:15:40.428 Data Units Read: 0 00:15:40.428 Data Units Written: 0 00:15:40.428 Host Read Commands: 0 00:15:40.428 Host Write Commands: 0 00:15:40.428 Controller Busy Time: 0 minutes 00:15:40.428 Power Cycles: 0 00:15:40.428 Power On Hours: 0 hours 00:15:40.428 Unsafe Shutdowns: 0 00:15:40.428 Unrecoverable Media Errors: 0 00:15:40.428 Lifetime Error Log Entries: 0 00:15:40.428 Warning Temperature Time: 0 minutes 00:15:40.428 Critical Temperature Time: 0 minutes 00:15:40.428 00:15:40.428 Number of Queues 00:15:40.428 ================ 00:15:40.428 Number of I/O Submission Queues: 127 00:15:40.428 Number of I/O Completion Queues: 127 00:15:40.428 00:15:40.428 Active Namespaces 00:15:40.428 ================= 00:15:40.428 Namespace ID:1 00:15:40.428 Error Recovery Timeout: Unlimited 00:15:40.428 Command Set Identifier: NVM (00h) 00:15:40.428 Deallocate: Supported 00:15:40.428 Deallocated/Unwritten Error: Not Supported 00:15:40.428 Deallocated Read Value: Unknown 00:15:40.428 Deallocate in Write Zeroes: Not Supported 00:15:40.428 Deallocated Guard Field: 0xFFFF 00:15:40.428 Flush: Supported 00:15:40.428 Reservation: Supported 00:15:40.428 Namespace Sharing Capabilities: Multiple Controllers 00:15:40.428 Size (in LBAs): 131072 (0GiB) 00:15:40.428 Capacity (in LBAs): 131072 (0GiB) 00:15:40.428 Utilization (in LBAs): 131072 (0GiB) 00:15:40.428 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:40.428 EUI64: ABCDEF0123456789 00:15:40.428 UUID: e248e407-3b90-414b-b95c-b030b72e26a5 00:15:40.428 Thin Provisioning: Not Supported 00:15:40.428 Per-NS Atomic Units: Yes 00:15:40.428 Atomic Boundary Size (Normal): 0 00:15:40.428 Atomic Boundary Size (PFail): 0 00:15:40.428 Atomic Boundary Offset: 0 00:15:40.428 Maximum Single Source Range Length: 65535 00:15:40.428 Maximum Copy Length: 65535 00:15:40.428 Maximum Source Range Count: 1 00:15:40.428 NGUID/EUI64 Never Reused: No 00:15:40.428 Namespace Write Protected: No 00:15:40.428 Number of LBA Formats: 1 00:15:40.428 Current LBA Format: LBA Format #00 00:15:40.428 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:40.428 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-tcp 00:15:40.428 rmmod nvme_tcp 00:15:40.428 rmmod nvme_fabrics 00:15:40.428 rmmod nvme_keyring 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 74410 ']' 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 74410 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 74410 ']' 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 74410 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.428 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74410 00:15:40.687 killing process with pid 74410 00:15:40.687 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:40.687 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:40.687 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74410' 00:15:40.687 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 74410 00:15:40.687 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 74410 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:40.946 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:41.205 00:15:41.205 real 0m2.436s 00:15:41.205 user 0m5.046s 00:15:41.205 sys 0m0.822s 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.205 ************************************ 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:41.205 END TEST nvmf_identify 00:15:41.205 ************************************ 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:41.205 ************************************ 00:15:41.205 START TEST nvmf_perf 00:15:41.205 ************************************ 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:41.205 * Looking for test storage... 
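The controller and namespace listing printed above was produced by SPDK's identify host test (host/identify.sh) against the subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.3:4420. While such a target is still running, a roughly similar view can be taken from the initiator side with standard nvme-cli; the commands below are a hedged sketch only — they assume nvme-cli is installed, that 10.0.0.3:4420 is reachable from where they are run, and that the kernel names the newly attached controller /dev/nvme0.

  # Discover the subsystems offered by the target (address and port taken from the log above)
  nvme discover -t tcp -a 10.0.0.3 -s 4420
  # Connect to the subsystem reported in the identify output
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Dump controller identify data (vendor ID, max data transfer size, queue limits, ...)
  nvme id-ctrl /dev/nvme0
  # Detach again when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Note that the test itself tears the target down immediately afterwards (nvmf_delete_subsystem followed by nvmftestfini, as shown in the log), so these commands only apply to a live setup, not to this finished run.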
00:15:41.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:15:41.205 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:41.464 08:26:42 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:41.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.464 --rc genhtml_branch_coverage=1 00:15:41.464 --rc genhtml_function_coverage=1 00:15:41.464 --rc genhtml_legend=1 00:15:41.464 --rc geninfo_all_blocks=1 00:15:41.464 --rc geninfo_unexecuted_blocks=1 00:15:41.464 00:15:41.464 ' 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:41.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.464 --rc genhtml_branch_coverage=1 00:15:41.464 --rc genhtml_function_coverage=1 00:15:41.464 --rc genhtml_legend=1 00:15:41.464 --rc geninfo_all_blocks=1 00:15:41.464 --rc geninfo_unexecuted_blocks=1 00:15:41.464 00:15:41.464 ' 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:41.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.464 --rc genhtml_branch_coverage=1 00:15:41.464 --rc genhtml_function_coverage=1 00:15:41.464 --rc genhtml_legend=1 00:15:41.464 --rc geninfo_all_blocks=1 00:15:41.464 --rc geninfo_unexecuted_blocks=1 00:15:41.464 00:15:41.464 ' 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:41.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.464 --rc genhtml_branch_coverage=1 00:15:41.464 --rc genhtml_function_coverage=1 00:15:41.464 --rc genhtml_legend=1 00:15:41.464 --rc geninfo_all_blocks=1 00:15:41.464 --rc geninfo_unexecuted_blocks=1 00:15:41.464 00:15:41.464 ' 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:41.464 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.465 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:41.465 Cannot find device "nvmf_init_br" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:41.465 Cannot find device "nvmf_init_br2" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:41.465 Cannot find device "nvmf_tgt_br" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.465 Cannot find device "nvmf_tgt_br2" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:41.465 Cannot find device "nvmf_init_br" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:41.465 Cannot find device "nvmf_init_br2" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:41.465 Cannot find device "nvmf_tgt_br" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:41.465 Cannot find device "nvmf_tgt_br2" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:41.465 Cannot find device "nvmf_br" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:41.465 Cannot find device "nvmf_init_if" 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:41.465 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:41.723 Cannot find device "nvmf_init_if2" 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.723 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:41.723 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:41.723 08:26:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.724 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:41.982 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:41.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:15:41.982 00:15:41.982 --- 10.0.0.3 ping statistics --- 00:15:41.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.982 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:15:41.982 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:41.982 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:41.982 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:15:41.982 00:15:41.982 --- 10.0.0.4 ping statistics --- 00:15:41.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.982 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:41.982 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:15:41.982 00:15:41.982 --- 10.0.0.1 ping statistics --- 00:15:41.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.982 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:15:41.982 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:41.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:41.982 00:15:41.982 --- 10.0.0.2 ping statistics --- 00:15:41.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.982 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:41.982 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.982 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # return 0 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=74664 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 74664 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74664 ']' 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
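(A condensed recap of the nvmf_veth_init sequence traced above, using the same interface names and addresses shown in the trace; flags are abridged — the harness also tags its iptables rules with an SPDK_NVMF comment — so treat this as an illustrative sketch rather than the harness's exact code path:)
ip netns add nvmf_tgt_ns_spdk                                    # target side lives in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT              # allow bridged traffic when br_netfilter is active
ping -c 1 10.0.0.3                                               # connectivity check, as in the trace above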
00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.983 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:41.983 [2024-10-15 08:26:43.570465] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:41.983 [2024-10-15 08:26:43.570582] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.242 [2024-10-15 08:26:43.714497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.242 [2024-10-15 08:26:43.799085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.242 [2024-10-15 08:26:43.799441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.242 [2024-10-15 08:26:43.799671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.242 [2024-10-15 08:26:43.799885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.242 [2024-10-15 08:26:43.800007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.242 [2024-10-15 08:26:43.801631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.242 [2024-10-15 08:26:43.801687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.242 [2024-10-15 08:26:43.801755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.242 [2024-10-15 08:26:43.801759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.242 [2024-10-15 08:26:43.877712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.242 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.242 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:15:42.242 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:42.242 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:42.242 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.501 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.501 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:42.501 08:26:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:42.760 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:42.760 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:43.019 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:43.019 08:26:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:43.586 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:43.586 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:43.586 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:43.586 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:43.586 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:43.844 [2024-10-15 08:26:45.372673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.844 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.102 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:44.102 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:44.361 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:44.361 08:26:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:44.619 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:44.878 [2024-10-15 08:26:46.503172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:44.878 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:45.136 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:45.136 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:45.136 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:45.136 08:26:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:46.534 Initializing NVMe Controllers 00:15:46.534 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:46.534 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:46.534 Initialization complete. Launching workers. 00:15:46.534 ======================================================== 00:15:46.534 Latency(us) 00:15:46.534 Device Information : IOPS MiB/s Average min max 00:15:46.534 PCIE (0000:00:10.0) NSID 1 from core 0: 21952.00 85.75 1457.74 385.05 7638.63 00:15:46.534 ======================================================== 00:15:46.534 Total : 21952.00 85.75 1457.74 385.05 7638.63 00:15:46.534 00:15:46.534 08:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:47.500 Initializing NVMe Controllers 00:15:47.500 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.500 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:47.500 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:47.500 Initialization complete. Launching workers. 
00:15:47.500 ======================================================== 00:15:47.500 Latency(us) 00:15:47.500 Device Information : IOPS MiB/s Average min max 00:15:47.500 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3563.75 13.92 280.29 107.92 5171.27 00:15:47.500 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.75 0.48 8144.40 5016.31 12029.60 00:15:47.500 ======================================================== 00:15:47.500 Total : 3687.50 14.40 544.20 107.92 12029.60 00:15:47.500 00:15:47.759 08:26:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:49.136 Initializing NVMe Controllers 00:15:49.136 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:49.136 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:49.136 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:49.136 Initialization complete. Launching workers. 00:15:49.136 ======================================================== 00:15:49.136 Latency(us) 00:15:49.136 Device Information : IOPS MiB/s Average min max 00:15:49.136 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8587.07 33.54 3726.57 635.75 10692.07 00:15:49.136 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3761.14 14.69 8507.36 6761.04 16843.78 00:15:49.136 ======================================================== 00:15:49.136 Total : 12348.21 48.24 5182.75 635.75 16843.78 00:15:49.136 00:15:49.136 08:26:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:49.136 08:26:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:51.667 Initializing NVMe Controllers 00:15:51.667 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:51.667 Controller IO queue size 128, less than required. 00:15:51.667 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.667 Controller IO queue size 128, less than required. 00:15:51.667 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.667 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:51.667 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:51.667 Initialization complete. Launching workers. 
00:15:51.667 ======================================================== 00:15:51.667 Latency(us) 00:15:51.667 Device Information : IOPS MiB/s Average min max 00:15:51.667 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1455.19 363.80 89747.65 41371.12 132467.36 00:15:51.667 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 626.57 156.64 213741.71 54480.20 358354.66 00:15:51.667 ======================================================== 00:15:51.667 Total : 2081.76 520.44 127067.70 41371.12 358354.66 00:15:51.667 00:15:51.667 08:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:51.925 Initializing NVMe Controllers 00:15:51.925 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:51.925 Controller IO queue size 128, less than required. 00:15:51.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.925 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:51.925 Controller IO queue size 128, less than required. 00:15:51.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.925 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:51.925 WARNING: Some requested NVMe devices were skipped 00:15:51.925 No valid NVMe controllers or AIO or URING devices found 00:15:51.925 08:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:54.456 Initializing NVMe Controllers 00:15:54.456 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:54.456 Controller IO queue size 128, less than required. 00:15:54.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:54.456 Controller IO queue size 128, less than required. 00:15:54.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:54.456 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:54.456 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:54.456 Initialization complete. Launching workers. 
00:15:54.456 00:15:54.456 ==================== 00:15:54.456 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:54.456 TCP transport: 00:15:54.456 polls: 8307 00:15:54.456 idle_polls: 4775 00:15:54.456 sock_completions: 3532 00:15:54.456 nvme_completions: 5693 00:15:54.456 submitted_requests: 8534 00:15:54.456 queued_requests: 1 00:15:54.456 00:15:54.456 ==================== 00:15:54.456 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:54.456 TCP transport: 00:15:54.456 polls: 11054 00:15:54.456 idle_polls: 7166 00:15:54.456 sock_completions: 3888 00:15:54.456 nvme_completions: 6061 00:15:54.456 submitted_requests: 9042 00:15:54.456 queued_requests: 1 00:15:54.456 ======================================================== 00:15:54.456 Latency(us) 00:15:54.456 Device Information : IOPS MiB/s Average min max 00:15:54.456 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1420.05 355.01 92141.54 44403.48 150006.31 00:15:54.456 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1511.86 377.97 86114.60 43928.08 146349.38 00:15:54.456 ======================================================== 00:15:54.456 Total : 2931.92 732.98 89033.71 43928.08 150006.31 00:15:54.456 00:15:54.456 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:54.714 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.971 rmmod nvme_tcp 00:15:54.971 rmmod nvme_fabrics 00:15:54.971 rmmod nvme_keyring 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 74664 ']' 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 74664 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74664 ']' 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74664 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74664 00:15:54.971 killing process with pid 74664 00:15:54.971 08:26:56 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74664' 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74664 00:15:54.971 08:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74664 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:55.905 ************************************ 00:15:55.905 END TEST nvmf_perf 00:15:55.905 ************************************ 
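(For reference, the target-side configuration exercised by this test reduces to the rpc.py calls condensed below from the trace above; paths, NQN, and addresses are the ones shown there, and the individual perf runs add further flags such as -HI or --transport-stat. This is an illustrative sketch, not the test script itself:)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                                       # Malloc0 backing bdev (64 MiB, 512-byte blocks)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# one representative host-side run against that listener:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'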
00:15:55.905 00:15:55.905 real 0m14.812s 00:15:55.905 user 0m52.398s 00:15:55.905 sys 0m4.414s 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.905 08:26:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.164 ************************************ 00:15:56.164 START TEST nvmf_fio_host 00:15:56.164 ************************************ 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:56.164 * Looking for test storage... 00:15:56.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.164 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:56.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.164 --rc genhtml_branch_coverage=1 00:15:56.164 --rc genhtml_function_coverage=1 00:15:56.164 --rc genhtml_legend=1 00:15:56.164 --rc geninfo_all_blocks=1 00:15:56.164 --rc geninfo_unexecuted_blocks=1 00:15:56.164 00:15:56.165 ' 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.165 --rc genhtml_branch_coverage=1 00:15:56.165 --rc genhtml_function_coverage=1 00:15:56.165 --rc genhtml_legend=1 00:15:56.165 --rc geninfo_all_blocks=1 00:15:56.165 --rc geninfo_unexecuted_blocks=1 00:15:56.165 00:15:56.165 ' 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.165 --rc genhtml_branch_coverage=1 00:15:56.165 --rc genhtml_function_coverage=1 00:15:56.165 --rc genhtml_legend=1 00:15:56.165 --rc geninfo_all_blocks=1 00:15:56.165 --rc geninfo_unexecuted_blocks=1 00:15:56.165 00:15:56.165 ' 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:56.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.165 --rc genhtml_branch_coverage=1 00:15:56.165 --rc genhtml_function_coverage=1 00:15:56.165 --rc genhtml_legend=1 00:15:56.165 --rc geninfo_all_blocks=1 00:15:56.165 --rc geninfo_unexecuted_blocks=1 00:15:56.165 00:15:56.165 ' 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.165 08:26:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.165 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.425 08:26:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.425 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
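(The "[: : integer expression expected" message traced from nvmf/common.sh line 33 above is a side effect of testing an empty toggle with a numeric comparison; a minimal bash reproduction follows, with an illustrative variable name not taken from the harness:)
toggle=''
if [ "$toggle" -eq 1 ]; then    # -eq needs an integer operand; with an empty string,
    echo "feature enabled"      # [ prints "integer expression expected" on stderr
fi                              # and returns false, so the script simply continues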
00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:56.425 Cannot find device "nvmf_init_br" 00:15:56.425 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:56.426 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:56.426 Cannot find device "nvmf_init_br2" 00:15:56.426 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:56.426 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:56.426 Cannot find device "nvmf_tgt_br" 00:15:56.426 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:56.426 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:56.426 Cannot find device "nvmf_tgt_br2" 00:15:56.426 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:56.426 08:26:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:56.426 Cannot find device "nvmf_init_br" 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:56.426 Cannot find device "nvmf_init_br2" 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:56.426 Cannot find device "nvmf_tgt_br" 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:56.426 Cannot find device "nvmf_tgt_br2" 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:56.426 Cannot find device "nvmf_br" 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:56.426 Cannot find device "nvmf_init_if" 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:56.426 Cannot find device "nvmf_init_if2" 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.426 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:56.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:56.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:56.684 00:15:56.684 --- 10.0.0.3 ping statistics --- 00:15:56.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.684 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:56.684 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:56.685 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:56.685 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:56.685 00:15:56.685 --- 10.0.0.4 ping statistics --- 00:15:56.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.685 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:56.685 00:15:56.685 --- 10.0.0.1 ping statistics --- 00:15:56.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.685 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:56.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:56.685 00:15:56.685 --- 10.0.0.2 ping statistics --- 00:15:56.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.685 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # return 0 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75118 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75118 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 75118 ']' 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.685 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.944 [2024-10-15 08:26:58.457004] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:15:56.944 [2024-10-15 08:26:58.457110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.944 [2024-10-15 08:26:58.597549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.203 [2024-10-15 08:26:58.679193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.203 [2024-10-15 08:26:58.679276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.203 [2024-10-15 08:26:58.679288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.203 [2024-10-15 08:26:58.679297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.203 [2024-10-15 08:26:58.679305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
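For readers following the trace, the nvmf_veth_init sequence above reduces to roughly the following topology setup. This is a condensed sketch using the interface names and addresses shown in the log, not the literal test/nvmf/common.sh source (which also tears down stale devices and tags its iptables rules with comments):

# Target-side interfaces live in a private namespace; initiator-side stay in the host namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator pair 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator pair 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target pair 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator addresses (host side)
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses (namespace side)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge ties all four peer ends together
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # connectivity sanity checks

Once the pings succeed, nvmf_tgt is launched inside nvmf_tgt_ns_spdk so that it listens on 10.0.0.3/10.0.0.4 while the fio initiator connects from the host namespace, which is exactly what the trace shows next.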
00:15:57.203 [2024-10-15 08:26:58.680732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.203 [2024-10-15 08:26:58.680864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.203 [2024-10-15 08:26:58.680987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.203 [2024-10-15 08:26:58.680988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.203 [2024-10-15 08:26:58.755188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.203 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.203 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:15:57.203 08:26:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:57.461 [2024-10-15 08:26:59.125834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.461 08:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:57.461 08:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.461 08:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.462 08:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:58.071 Malloc1 00:15:58.071 08:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:58.330 08:26:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.591 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:58.850 [2024-10-15 08:27:00.395819] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:58.850 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:59.108 08:27:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:59.367 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:59.367 fio-3.35 00:15:59.367 Starting 1 thread 00:16:01.952 00:16:01.952 test: (groupid=0, jobs=1): err= 0: pid=75193: Tue Oct 15 08:27:03 2024 00:16:01.952 read: IOPS=8435, BW=33.0MiB/s (34.6MB/s)(66.1MiB/2007msec) 00:16:01.952 slat (nsec): min=1996, max=286468, avg=2797.29, stdev=3418.64 00:16:01.952 clat (usec): min=1943, max=14068, avg=7910.04, stdev=620.52 00:16:01.952 lat (usec): min=1980, max=14071, avg=7912.84, stdev=620.28 00:16:01.952 clat percentiles (usec): 00:16:01.952 | 1.00th=[ 6652], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7439], 00:16:01.952 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:16:01.952 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 8848], 00:16:01.952 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[12387], 99.95th=[13566], 00:16:01.952 | 99.99th=[13829] 00:16:01.952 bw ( KiB/s): min=31840, max=35152, per=99.90%, avg=33708.00, stdev=1527.11, samples=4 00:16:01.952 iops : min= 7960, max= 8788, avg=8427.00, stdev=381.78, samples=4 00:16:01.952 write: IOPS=8428, BW=32.9MiB/s (34.5MB/s)(66.1MiB/2007msec); 0 zone resets 00:16:01.952 slat (usec): min=2, max=169, avg= 2.87, stdev= 1.73 00:16:01.952 clat (usec): min=1858, max=13474, avg=7236.06, stdev=571.29 00:16:01.952 lat (usec): min=1869, max=13477, avg=7238.94, stdev=571.22 00:16:01.952 clat 
percentiles (usec): 00:16:01.952 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6587], 20.00th=[ 6849], 00:16:01.952 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7308], 00:16:01.952 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8029], 00:16:01.952 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[11207], 99.95th=[12518], 00:16:01.952 | 99.99th=[13435] 00:16:01.952 bw ( KiB/s): min=32582, max=34816, per=99.95%, avg=33697.50, stdev=1185.71, samples=4 00:16:01.952 iops : min= 8145, max= 8704, avg=8424.25, stdev=296.59, samples=4 00:16:01.952 lat (msec) : 2=0.01%, 4=0.12%, 10=99.59%, 20=0.28% 00:16:01.952 cpu : usr=66.40%, sys=24.63%, ctx=45, majf=0, minf=6 00:16:01.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:01.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:01.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:01.952 issued rwts: total=16930,16916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:01.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:01.952 00:16:01.952 Run status group 0 (all jobs): 00:16:01.952 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=66.1MiB (69.3MB), run=2007-2007msec 00:16:01.952 WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=66.1MiB (69.3MB), run=2007-2007msec 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:01.952 08:27:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:01.952 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:01.952 fio-3.35 00:16:01.952 Starting 1 thread 00:16:04.486 00:16:04.486 test: (groupid=0, jobs=1): err= 0: pid=75243: Tue Oct 15 08:27:05 2024 00:16:04.486 read: IOPS=8000, BW=125MiB/s (131MB/s)(251MiB/2007msec) 00:16:04.486 slat (usec): min=3, max=220, avg= 3.94, stdev= 2.52 00:16:04.486 clat (usec): min=2608, max=17344, avg=8832.09, stdev=2559.50 00:16:04.486 lat (usec): min=2611, max=17347, avg=8836.03, stdev=2559.60 00:16:04.486 clat percentiles (usec): 00:16:04.486 | 1.00th=[ 4293], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6521], 00:16:04.486 | 30.00th=[ 7308], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9241], 00:16:04.486 | 70.00th=[10028], 80.00th=[10945], 90.00th=[12256], 95.00th=[13566], 00:16:04.486 | 99.00th=[16057], 99.50th=[16450], 99.90th=[16909], 99.95th=[17171], 00:16:04.486 | 99.99th=[17433] 00:16:04.486 bw ( KiB/s): min=59424, max=72064, per=51.10%, avg=65408.00, stdev=5403.30, samples=4 00:16:04.486 iops : min= 3714, max= 4504, avg=4088.00, stdev=337.71, samples=4 00:16:04.486 write: IOPS=4664, BW=72.9MiB/s (76.4MB/s)(134MiB/1840msec); 0 zone resets 00:16:04.486 slat (usec): min=33, max=254, avg=39.77, stdev= 7.42 00:16:04.486 clat (usec): min=2838, max=23686, avg=12656.77, stdev=2406.74 00:16:04.486 lat (usec): min=2874, max=23725, avg=12696.54, stdev=2408.25 00:16:04.486 clat percentiles (usec): 00:16:04.486 | 1.00th=[ 7963], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10683], 00:16:04.486 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[13042], 00:16:04.486 | 70.00th=[13698], 80.00th=[14615], 90.00th=[15795], 95.00th=[16581], 00:16:04.486 | 99.00th=[19792], 99.50th=[20579], 99.90th=[22414], 99.95th=[22938], 00:16:04.486 | 99.99th=[23725] 00:16:04.486 bw ( KiB/s): min=62336, max=75552, per=91.46%, avg=68264.00, stdev=5734.42, samples=4 00:16:04.486 iops : min= 3896, max= 4722, avg=4266.50, stdev=358.40, samples=4 00:16:04.486 lat (msec) : 4=0.44%, 10=48.86%, 20=50.44%, 50=0.26% 00:16:04.486 cpu : usr=78.62%, sys=16.04%, ctx=436, majf=0, minf=11 00:16:04.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:04.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.486 issued rwts: total=16057,8583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.486 00:16:04.486 Run status group 0 (all jobs): 00:16:04.486 
READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2007-2007msec 00:16:04.486 WRITE: bw=72.9MiB/s (76.4MB/s), 72.9MiB/s-72.9MiB/s (76.4MB/s-76.4MB/s), io=134MiB (141MB), run=1840-1840msec 00:16:04.486 08:27:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:04.486 rmmod nvme_tcp 00:16:04.486 rmmod nvme_fabrics 00:16:04.486 rmmod nvme_keyring 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 75118 ']' 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 75118 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 75118 ']' 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 75118 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.486 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75118 00:16:04.745 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:04.745 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:04.745 killing process with pid 75118 00:16:04.745 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75118' 00:16:04.745 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 75118 00:16:04.745 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 75118 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:05.003 08:27:06 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:05.003 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:05.262 00:16:05.262 real 0m9.126s 00:16:05.262 user 0m35.647s 00:16:05.262 sys 0m2.632s 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.262 ************************************ 00:16:05.262 END TEST nvmf_fio_host 00:16:05.262 ************************************ 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.262 ************************************ 00:16:05.262 START TEST nvmf_failover 
00:16:05.262 ************************************ 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:05.262 * Looking for test storage... 00:16:05.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:05.262 08:27:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:05.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.522 --rc genhtml_branch_coverage=1 00:16:05.522 --rc genhtml_function_coverage=1 00:16:05.522 --rc genhtml_legend=1 00:16:05.522 --rc geninfo_all_blocks=1 00:16:05.522 --rc geninfo_unexecuted_blocks=1 00:16:05.522 00:16:05.522 ' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:05.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.522 --rc genhtml_branch_coverage=1 00:16:05.522 --rc genhtml_function_coverage=1 00:16:05.522 --rc genhtml_legend=1 00:16:05.522 --rc geninfo_all_blocks=1 00:16:05.522 --rc geninfo_unexecuted_blocks=1 00:16:05.522 00:16:05.522 ' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:05.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.522 --rc genhtml_branch_coverage=1 00:16:05.522 --rc genhtml_function_coverage=1 00:16:05.522 --rc genhtml_legend=1 00:16:05.522 --rc geninfo_all_blocks=1 00:16:05.522 --rc geninfo_unexecuted_blocks=1 00:16:05.522 00:16:05.522 ' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:05.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.522 --rc genhtml_branch_coverage=1 00:16:05.522 --rc genhtml_function_coverage=1 00:16:05.522 --rc genhtml_legend=1 00:16:05.522 --rc geninfo_all_blocks=1 00:16:05.522 --rc geninfo_unexecuted_blocks=1 00:16:05.522 00:16:05.522 ' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.522 
08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.522 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.522 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 
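At this point the failover test has sourced test/nvmf/common.sh and set its own preamble; the knobs it pins down before nvmftestinit touches the network boil down to roughly the following (a condensed recap of values visible in the trace above, not literal script source):

NVMF_PORT=4420                      # primary listener; the failover test adds 4421 and 4422 later
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVME_HOSTNQN=$(nvme gen-hostnqn)    # host NQN/ID regenerated per run
MALLOC_BDEV_SIZE=64                 # 64 MiB malloc bdev backs the test namespace
MALLOC_BLOCK_SIZE=512
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # separate RPC socket for the bdevperf initiator process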
00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:05.523 Cannot find device "nvmf_init_br" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:05.523 Cannot find device "nvmf_init_br2" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:05.523 Cannot find device "nvmf_tgt_br" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.523 Cannot find device "nvmf_tgt_br2" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:05.523 Cannot find device "nvmf_init_br" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:05.523 Cannot find device "nvmf_init_br2" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:05.523 Cannot find device "nvmf_tgt_br" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:05.523 Cannot find device "nvmf_tgt_br2" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:05.523 Cannot find device "nvmf_br" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:05.523 Cannot find device "nvmf_init_if" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:05.523 Cannot find device "nvmf_init_if2" 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.523 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.807 
08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:05.807 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:05.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:16:05.807 00:16:05.807 --- 10.0.0.3 ping statistics --- 00:16:05.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.808 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:05.808 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:05.808 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:16:05.808 00:16:05.808 --- 10.0.0.4 ping statistics --- 00:16:05.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.808 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:05.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:05.808 00:16:05.808 --- 10.0.0.1 ping statistics --- 00:16:05.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.808 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:05.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:05.808 00:16:05.808 --- 10.0.0.2 ping statistics --- 00:16:05.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.808 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # return 0 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=75508 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 75508 00:16:05.808 08:27:07 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75508 ']' 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.808 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:06.067 [2024-10-15 08:27:07.556443] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:16:06.067 [2024-10-15 08:27:07.556570] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.067 [2024-10-15 08:27:07.700655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:06.325 [2024-10-15 08:27:07.803231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.325 [2024-10-15 08:27:07.803303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.325 [2024-10-15 08:27:07.803318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.325 [2024-10-15 08:27:07.803329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.325 [2024-10-15 08:27:07.803338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
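(Editor's sketch, for readers following the trace above: the target is launched inside the nvmf_tgt_ns_spdk namespace and the harness then blocks in waitforlisten until the RPC socket answers. Reduced to a standalone form, with the polling loop as an illustrative assumption rather than the autotest helper itself, and run as root:)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # poll the default RPC socket until the target is ready to accept commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done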
00:16:06.325 [2024-10-15 08:27:07.804890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.325 [2024-10-15 08:27:07.805046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.325 [2024-10-15 08:27:07.805053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.325 [2024-10-15 08:27:07.882634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.325 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.325 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:06.325 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:06.325 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:06.325 08:27:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:06.325 08:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.325 08:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:06.892 [2024-10-15 08:27:08.317744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.892 08:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:06.892 Malloc0 00:16:07.150 08:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:07.408 08:27:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:07.666 08:27:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:07.924 [2024-10-15 08:27:09.429408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:07.924 08:27:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:08.182 [2024-10-15 08:27:09.697568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:08.182 08:27:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:08.440 [2024-10-15 08:27:10.009846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75558 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 
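(Collected from the trace above, the target-side configuration that host/failover.sh drives through rpc.py amounts to the sequence below; the loop over ports is only a compact rewrite of the three add_listener calls shown in the log:)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
  done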
00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75558 /var/tmp/bdevperf.sock 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75558 ']' 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.440 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:09.005 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.005 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:09.005 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:09.263 NVMe0n1 00:16:09.263 08:27:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:09.521 00:16:09.521 08:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75574 00:16:09.521 08:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:09.521 08:27:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:10.454 08:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:10.712 08:27:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:14.004 08:27:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:14.261 00:16:14.261 08:27:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:14.520 08:27:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:17.795 08:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:17.795 [2024-10-15 08:27:19.410226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:17.795 08:27:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:18.729 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:18.988 08:27:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75574 00:16:25.576 { 00:16:25.576 "results": [ 00:16:25.576 { 00:16:25.576 "job": "NVMe0n1", 00:16:25.576 "core_mask": "0x1", 00:16:25.576 "workload": "verify", 00:16:25.576 "status": "finished", 00:16:25.576 "verify_range": { 00:16:25.576 "start": 0, 00:16:25.576 "length": 16384 00:16:25.576 }, 00:16:25.576 "queue_depth": 128, 00:16:25.576 "io_size": 4096, 00:16:25.576 "runtime": 15.010411, 00:16:25.576 "iops": 8915.27886877981, 00:16:25.576 "mibps": 34.82530808117113, 00:16:25.576 "io_failed": 3341, 00:16:25.576 "io_timeout": 0, 00:16:25.576 "avg_latency_us": 13975.476201387466, 00:16:25.576 "min_latency_us": 659.0836363636364, 00:16:25.576 "max_latency_us": 30980.654545454545 00:16:25.576 } 00:16:25.576 ], 00:16:25.576 "core_count": 1 00:16:25.576 } 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75558 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75558 ']' 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75558 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75558 00:16:25.576 killing process with pid 75558 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75558' 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75558 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75558 00:16:25.576 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:25.576 [2024-10-15 08:27:10.096758] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:16:25.576 [2024-10-15 08:27:10.096886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75558 ] 00:16:25.576 [2024-10-15 08:27:10.233083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.576 [2024-10-15 08:27:10.310836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.576 [2024-10-15 08:27:10.384268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.576 Running I/O for 15 seconds... 
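(The throughput figures in the bdevperf JSON above are internally consistent: with io_size given in bytes, mibps is iops * io_size scaled to MiB, which a one-liner confirms:)
  awk 'BEGIN { printf "%.2f MiB/s\n", 8915.27886877981 * 4096 / (1024 * 1024) }'
  # prints 34.83, matching the reported mibps of 34.825 after rounding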
00:16:25.576 6820.00 IOPS, 26.64 MiB/s [2024-10-15T08:27:27.307Z] [2024-10-15 08:27:12.395033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.576 [2024-10-15 08:27:12.395128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.576 [2024-10-15 08:27:12.395188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.576 [2024-10-15 08:27:12.395221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.576 [2024-10-15 08:27:12.395253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.576 [2024-10-15 08:27:12.395284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.576 [2024-10-15 08:27:12.395315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.576 [2024-10-15 08:27:12.395355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.576 [2024-10-15 08:27:12.395386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.576 [2024-10-15 08:27:12.395417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.576 [2024-10-15 08:27:12.395434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.577 [2024-10-15 08:27:12.395449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:25.577 [2024-10-15 08:27:12.395464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644010 is same with the state(6) to be set 00:16:25.577 [2024-10-15 08:27:12.395518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.395574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62752 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.395625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62760 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.395690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62768 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.395742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.395793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.395844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62792 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.395895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62800 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.395955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.395967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.395978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62808 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.395992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62824 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62832 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62840 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62848 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62856 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62864 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62872 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62880 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62888 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62896 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62904 len:8 PRP1 0x0 PRP2 0x0 00:16:25.577 [2024-10-15 08:27:12.396667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.577 [2024-10-15 08:27:12.396682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.577 [2024-10-15 08:27:12.396694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.577 [2024-10-15 08:27:12.396705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62912 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.396719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.396733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.396744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.396755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62920 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.396769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.396795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.396807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.396818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62928 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.396832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 
08:27:12.396847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.396858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.396869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62936 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.396882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.396897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.396909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.396927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62944 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.396941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.396955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.396966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.396977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62952 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.396996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62960 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62968 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62976 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397183] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62984 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62992 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63000 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63008 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397400] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63016 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63024 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:16:25.578 [2024-10-15 08:27:12.397520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63032 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63040 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63048 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63056 len:8 PRP1 0x0 PRP2 0x0 00:16:25.578 [2024-10-15 08:27:12.397712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.578 [2024-10-15 08:27:12.397726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.578 [2024-10-15 08:27:12.397737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.578 [2024-10-15 08:27:12.397748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63064 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.397762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.397777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.397788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.397798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63072 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.397812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.397827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.397838] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.397849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63080 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.397870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.397885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.397896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.397907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63088 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.397922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.397936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.397947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.397958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63096 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.397978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.397992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63104 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63112 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398141] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63120 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63128 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63136 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63144 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63152 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63160 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63168 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 
[2024-10-15 08:27:12.398531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63176 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63184 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63192 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63200 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.579 [2024-10-15 08:27:12.398732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63208 len:8 PRP1 0x0 PRP2 0x0 00:16:25.579 [2024-10-15 08:27:12.398752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.579 [2024-10-15 08:27:12.398767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.579 [2024-10-15 08:27:12.398778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.398789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63216 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.398803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.398817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.398828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.398839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63224 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.398859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.398873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.398885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.398896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63232 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.398918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.398933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.398944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.398955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63240 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.398969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.398984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.398994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63248 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63256 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63264 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:63272 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63280 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63288 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63296 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63304 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63312 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63320 len:8 PRP1 0x0 PRP2 0x0 
00:16:25.580 [2024-10-15 08:27:12.399503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63328 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63336 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63344 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.580 [2024-10-15 08:27:12.399678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.580 [2024-10-15 08:27:12.399689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.580 [2024-10-15 08:27:12.399700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63352 len:8 PRP1 0x0 PRP2 0x0 00:16:25.580 [2024-10-15 08:27:12.399714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.399736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.399748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.399759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63360 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.399773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63368 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63376 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63384 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63392 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63400 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63408 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63416 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63424 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63432 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63440 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63448 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.412934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.412954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.412969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.581 [2024-10-15 08:27:12.412984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63456 len:8 PRP1 0x0 PRP2 0x0 00:16:25.581 [2024-10-15 08:27:12.413004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.581 [2024-10-15 08:27:12.413024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.581 [2024-10-15 08:27:12.413038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63464 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:25.582 [2024-10-15 08:27:12.413093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63472 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63480 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63488 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63496 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63504 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63512 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413543] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63520 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63528 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63536 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63544 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63552 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.413933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63560 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.413952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.413972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.413987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.414002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63568 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.414022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.414042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.414057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.414072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63576 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.414091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.414146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.414165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.414180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63584 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.414200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.414220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.414236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.414251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63592 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.414272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.414292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.414306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.582 [2024-10-15 08:27:12.414321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63600 len:8 PRP1 0x0 PRP2 0x0 00:16:25.582 [2024-10-15 08:27:12.414341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.582 [2024-10-15 08:27:12.414372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.582 [2024-10-15 08:27:12.414388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63608 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.414443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 
08:27:12.414458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63616 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.414512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 08:27:12.414527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63624 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.414582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 08:27:12.414597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62632 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.414652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 08:27:12.414667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62640 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.414721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 08:27:12.414736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62648 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.414790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 08:27:12.414805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62656 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.414861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 08:27:12.414876] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62664 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.414940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 08:27:12.414955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.414970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62672 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.414990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.415009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.583 [2024-10-15 08:27:12.415025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.583 [2024-10-15 08:27:12.415040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62680 len:8 PRP1 0x0 PRP2 0x0 00:16:25.583 [2024-10-15 08:27:12.415059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.415161] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x644010 was disconnected and freed. reset controller. 00:16:25.583 [2024-10-15 08:27:12.415191] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:25.583 [2024-10-15 08:27:12.415283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.583 [2024-10-15 08:27:12.415315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.415338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.583 [2024-10-15 08:27:12.415358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.415379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.583 [2024-10-15 08:27:12.415399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.415419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.583 [2024-10-15 08:27:12.415438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:12.415458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:25.583 [2024-10-15 08:27:12.415546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d52e0 (9): Bad file descriptor 00:16:25.583 [2024-10-15 08:27:12.421134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:25.583 [2024-10-15 08:27:12.458497] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:25.583 7612.50 IOPS, 29.74 MiB/s [2024-10-15T08:27:27.314Z] 8133.00 IOPS, 31.77 MiB/s [2024-10-15T08:27:27.314Z] 8403.75 IOPS, 32.83 MiB/s [2024-10-15T08:27:27.314Z] [2024-10-15 08:27:16.114723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.114819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.114854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.114872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.114925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.114942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.114960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.114975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.114992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78520 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.583 [2024-10-15 08:27:16.115321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.583 [2024-10-15 08:27:16.115336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.115377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 
08:27:16.115483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.115896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.115927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.115959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.115975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.115990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.116022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.116052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.116083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.116126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.584 [2024-10-15 08:27:16.116160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.584 [2024-10-15 08:27:16.116594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.584 [2024-10-15 08:27:16.116612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 
[2024-10-15 08:27:16.116800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.585 [2024-10-15 08:27:16.116942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.116958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.116979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.585 [2024-10-15 08:27:16.117577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.585 [2024-10-15 08:27:16.117592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.117622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.117653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.117690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.117721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.117751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.117782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.117813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.117850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.117882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.117912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.117943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.117974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.117989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.118035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.118066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.118097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 
08:27:16.118155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.118187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.118226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.118269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.586 [2024-10-15 08:27:16.118309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.586 [2024-10-15 08:27:16.118651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.586 [2024-10-15 08:27:16.118668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.587 [2024-10-15 08:27:16.118689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.118734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.587 [2024-10-15 08:27:16.118750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.118774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.587 [2024-10-15 08:27:16.118790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.118806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.587 [2024-10-15 08:27:16.118822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.118838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6481d0 is same with the state(6) to be set 00:16:25.587 [2024-10-15 08:27:16.118857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:16:25.587 [2024-10-15 08:27:16.118868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.118880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78448 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.118895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.118910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.587 [2024-10-15 08:27:16.118921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.118932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.118946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.118960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.587 [2024-10-15 08:27:16.118971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.118982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.118996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.587 [2024-10-15 08:27:16.119021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.119032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.119046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.587 [2024-10-15 08:27:16.119072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.119083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.119097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.587 [2024-10-15 08:27:16.119136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.119156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.119171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.587 [2024-10-15 08:27:16.119198] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.119208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.119228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.587 [2024-10-15 08:27:16.119254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.119266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.119280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.587 [2024-10-15 08:27:16.119305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.587 [2024-10-15 08:27:16.119316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:16:25.587 [2024-10-15 08:27:16.119330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119398] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6481d0 was disconnected and freed. reset controller. 00:16:25.587 [2024-10-15 08:27:16.119418] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:16:25.587 [2024-10-15 08:27:16.119480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.587 [2024-10-15 08:27:16.119502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.587 [2024-10-15 08:27:16.119533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.587 [2024-10-15 08:27:16.119562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.587 [2024-10-15 08:27:16.119591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.587 [2024-10-15 08:27:16.119606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:25.587 [2024-10-15 08:27:16.119661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d52e0 (9): Bad file descriptor 00:16:25.587 [2024-10-15 08:27:16.123484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:25.587 [2024-10-15 08:27:16.163559] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:25.587 8462.20 IOPS, 33.06 MiB/s [2024-10-15T08:27:27.318Z] 8581.17 IOPS, 33.52 MiB/s [2024-10-15T08:27:27.318Z] 8651.29 IOPS, 33.79 MiB/s [2024-10-15T08:27:27.318Z] 8715.38 IOPS, 34.04 MiB/s [2024-10-15T08:27:27.318Z] 8762.56 IOPS, 34.23 MiB/s [2024-10-15T08:27:27.318Z] [2024-10-15 08:27:20.678970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.587 [2024-10-15 08:27:20.679050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679342] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.679452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.679981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.679997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.588 [2024-10-15 08:27:20.680028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680045] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.680060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.680091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.680122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.680166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.680199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.680233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.680264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.588 [2024-10-15 08:27:20.680296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.588 [2024-10-15 08:27:20.680312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.680327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.680366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25752 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.680400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.680431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.680463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.680494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:25.589 [2024-10-15 08:27:20.680713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.680973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.680990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.589 [2024-10-15 08:27:20.681005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.589 [2024-10-15 08:27:20.681435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.589 [2024-10-15 08:27:20.681451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.681498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.681530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.681562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.681600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.681633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.681664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.681695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.681727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.681972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.681987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.682027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 
[2024-10-15 08:27:20.682043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.682058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.682090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.682143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.682178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.682210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.682242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.590 [2024-10-15 08:27:20.682273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:25488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.590 [2024-10-15 08:27:20.682779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.590 [2024-10-15 08:27:20.682795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6483b0 is same with the state(6) to be set 00:16:25.591 [2024-10-15 08:27:20.682814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.682826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.682837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.682857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.682873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.682887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.682903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26024 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.682918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.682933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.682944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.682955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26032 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.682969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.682984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.682994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26040 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26048 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 
08:27:20.683069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26056 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26064 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26072 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26080 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25512 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25520 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25528 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25544 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25552 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25560 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.591 [2024-10-15 08:27:20.683701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.591 [2024-10-15 08:27:20.683712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:8 PRP1 0x0 PRP2 0x0 00:16:25.591 [2024-10-15 08:27:20.683739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683809] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6483b0 was disconnected and freed. reset controller. 00:16:25.591 [2024-10-15 08:27:20.683830] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:16:25.591 [2024-10-15 08:27:20.683889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.591 [2024-10-15 08:27:20.683912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.591 [2024-10-15 08:27:20.683944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.591 [2024-10-15 08:27:20.683973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.683988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.591 [2024-10-15 08:27:20.684009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.591 [2024-10-15 08:27:20.684024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:25.591 [2024-10-15 08:27:20.684077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d52e0 (9): Bad file descriptor 00:16:25.591 [2024-10-15 08:27:20.687884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:25.591 [2024-10-15 08:27:20.721426] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
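(For context: the failover exercised above is driven entirely through rpc.py, and the same calls show up again in the trace below. Condensed, the flow is roughly the sketch that follows; the NQN, addresses and ports are the ones used in this run, and this is only a summary, not the full host/failover.sh script.)
    # target: expose the subsystem on two additional ports so the initiator has alternate paths
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
    # bdevperf side: attach the primary path and register the alternates with -x failover
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # detaching the active path forces the "Start failover from X to Y" and
    # "Resetting controller successful" sequence seen in the log above
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1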
00:16:25.591 8771.00 IOPS, 34.26 MiB/s [2024-10-15T08:27:27.322Z] 8803.45 IOPS, 34.39 MiB/s [2024-10-15T08:27:27.322Z] 8837.83 IOPS, 34.52 MiB/s [2024-10-15T08:27:27.322Z] 8872.92 IOPS, 34.66 MiB/s [2024-10-15T08:27:27.322Z] 8898.14 IOPS, 34.76 MiB/s [2024-10-15T08:27:27.322Z] 8913.47 IOPS, 34.82 MiB/s 00:16:25.591 Latency(us) 00:16:25.591 [2024-10-15T08:27:27.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.591 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:25.591 Verification LBA range: start 0x0 length 0x4000 00:16:25.591 NVMe0n1 : 15.01 8915.28 34.83 222.58 0.00 13975.48 659.08 30980.65 00:16:25.591 [2024-10-15T08:27:27.322Z] =================================================================================================================== 00:16:25.591 [2024-10-15T08:27:27.322Z] Total : 8915.28 34.83 222.58 0.00 13975.48 659.08 30980.65 00:16:25.592 Received shutdown signal, test time was about 15.000000 seconds 00:16:25.592 00:16:25.592 Latency(us) 00:16:25.592 [2024-10-15T08:27:27.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.592 [2024-10-15T08:27:27.323Z] =================================================================================================================== 00:16:25.592 [2024-10-15T08:27:27.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75751 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75751 /var/tmp/bdevperf.sock 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75751 ']' 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.592 08:27:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:25.592 08:27:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.592 08:27:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:16:25.592 08:27:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:25.592 [2024-10-15 08:27:27.280173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:25.850 08:27:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:25.850 [2024-10-15 08:27:27.580373] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:26.108 08:27:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:26.366 NVMe0n1 00:16:26.366 08:27:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:26.624 00:16:26.624 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:27.190 00:16:27.190 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:16:27.190 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:27.190 08:27:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:27.449 08:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:16:30.736 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:30.736 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:16:30.995 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75826 00:16:30.995 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:30.995 08:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75826 00:16:31.931 { 00:16:31.931 "results": [ 00:16:31.931 { 00:16:31.931 "job": "NVMe0n1", 00:16:31.931 "core_mask": "0x1", 00:16:31.931 "workload": "verify", 00:16:31.931 "status": "finished", 00:16:31.931 "verify_range": { 00:16:31.931 "start": 0, 00:16:31.931 "length": 16384 00:16:31.931 }, 00:16:31.931 "queue_depth": 128, 
00:16:31.931 "io_size": 4096, 00:16:31.931 "runtime": 1.008861, 00:16:31.931 "iops": 6872.106266373663, 00:16:31.931 "mibps": 26.84416510302212, 00:16:31.931 "io_failed": 0, 00:16:31.931 "io_timeout": 0, 00:16:31.931 "avg_latency_us": 18550.80295923318, 00:16:31.931 "min_latency_us": 2323.549090909091, 00:16:31.931 "max_latency_us": 15609.483636363637 00:16:31.931 } 00:16:31.931 ], 00:16:31.931 "core_count": 1 00:16:31.931 } 00:16:31.931 08:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:31.931 [2024-10-15 08:27:26.654488] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:16:31.931 [2024-10-15 08:27:26.654615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75751 ] 00:16:31.931 [2024-10-15 08:27:26.789680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.931 [2024-10-15 08:27:26.865733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.931 [2024-10-15 08:27:26.937541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:31.931 [2024-10-15 08:27:29.146387] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:31.931 [2024-10-15 08:27:29.146523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.931 [2024-10-15 08:27:29.146552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.931 [2024-10-15 08:27:29.146572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.931 [2024-10-15 08:27:29.146587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.931 [2024-10-15 08:27:29.146602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.931 [2024-10-15 08:27:29.146616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.931 [2024-10-15 08:27:29.146632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:31.931 [2024-10-15 08:27:29.146646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.931 [2024-10-15 08:27:29.146661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:31.931 [2024-10-15 08:27:29.146716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:31.931 [2024-10-15 08:27:29.146750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82e2e0 (9): Bad file descriptor 00:16:31.931 [2024-10-15 08:27:29.154763] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:31.931 Running I/O for 1 seconds... 
00:16:31.931 6805.00 IOPS, 26.58 MiB/s 00:16:31.931 Latency(us) 00:16:31.931 [2024-10-15T08:27:33.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.931 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:31.931 Verification LBA range: start 0x0 length 0x4000 00:16:31.931 NVMe0n1 : 1.01 6872.11 26.84 0.00 0.00 18550.80 2323.55 15609.48 00:16:31.931 [2024-10-15T08:27:33.662Z] =================================================================================================================== 00:16:31.931 [2024-10-15T08:27:33.662Z] Total : 6872.11 26.84 0.00 0.00 18550.80 2323.55 15609.48 00:16:31.931 08:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:31.931 08:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:32.499 08:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:32.499 08:27:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:32.499 08:27:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:33.066 08:27:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:33.324 08:27:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:36.609 08:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:36.610 08:27:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75751 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75751 ']' 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75751 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75751 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.610 killing process with pid 75751 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75751' 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75751 00:16:36.610 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75751 00:16:36.869 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:36.869 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.128 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.128 rmmod nvme_tcp 00:16:37.386 rmmod nvme_fabrics 00:16:37.386 rmmod nvme_keyring 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 75508 ']' 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 75508 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75508 ']' 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75508 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75508 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:37.386 killing process with pid 75508 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75508' 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75508 00:16:37.386 08:27:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75508 00:16:37.655 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:37.655 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:37.655 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:37.656 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:16:37.914 00:16:37.914 real 0m32.628s 00:16:37.914 user 2m5.370s 00:16:37.914 sys 0m5.864s 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:37.914 ************************************ 00:16:37.914 END TEST nvmf_failover 00:16:37.914 ************************************ 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.914 ************************************ 00:16:37.914 START TEST nvmf_host_discovery 00:16:37.914 ************************************ 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:37.914 * Looking for test storage... 
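(Before any discovery traffic flows, the test rebuilds the usual veth/namespace topology: the initiator-side interfaces stay in the default namespace with 10.0.0.1 and 10.0.0.2, the target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, and both sides are joined by the nvmf_br bridge. The ip/iptables calls traced further down condense to roughly this sketch, using the same names and addresses:)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # reachability check, matching the ping output below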
00:16:37.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:16:37.914 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:38.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.174 --rc genhtml_branch_coverage=1 00:16:38.174 --rc genhtml_function_coverage=1 00:16:38.174 --rc genhtml_legend=1 00:16:38.174 --rc geninfo_all_blocks=1 00:16:38.174 --rc geninfo_unexecuted_blocks=1 00:16:38.174 00:16:38.174 ' 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:38.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.174 --rc genhtml_branch_coverage=1 00:16:38.174 --rc genhtml_function_coverage=1 00:16:38.174 --rc genhtml_legend=1 00:16:38.174 --rc geninfo_all_blocks=1 00:16:38.174 --rc geninfo_unexecuted_blocks=1 00:16:38.174 00:16:38.174 ' 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:38.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.174 --rc genhtml_branch_coverage=1 00:16:38.174 --rc genhtml_function_coverage=1 00:16:38.174 --rc genhtml_legend=1 00:16:38.174 --rc geninfo_all_blocks=1 00:16:38.174 --rc geninfo_unexecuted_blocks=1 00:16:38.174 00:16:38.174 ' 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:38.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.174 --rc genhtml_branch_coverage=1 00:16:38.174 --rc genhtml_function_coverage=1 00:16:38.174 --rc genhtml_legend=1 00:16:38.174 --rc geninfo_all_blocks=1 00:16:38.174 --rc geninfo_unexecuted_blocks=1 00:16:38.174 00:16:38.174 ' 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.174 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.175 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:38.175 Cannot find device "nvmf_init_br" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:38.175 Cannot find device "nvmf_init_br2" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:38.175 Cannot find device "nvmf_tgt_br" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.175 Cannot find device "nvmf_tgt_br2" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:38.175 Cannot find device "nvmf_init_br" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:38.175 Cannot find device "nvmf_init_br2" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:38.175 Cannot find device "nvmf_tgt_br" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:38.175 Cannot find device "nvmf_tgt_br2" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:38.175 Cannot find device "nvmf_br" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:38.175 Cannot find device "nvmf_init_if" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:38.175 Cannot find device "nvmf_init_if2" 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:38.175 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:38.435 08:27:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:38.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:38.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:38.435 00:16:38.435 --- 10.0.0.3 ping statistics --- 00:16:38.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.435 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:38.435 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:38.435 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:38.435 00:16:38.435 --- 10.0.0.4 ping statistics --- 00:16:38.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.435 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:38.435 00:16:38.435 --- 10.0.0.1 ping statistics --- 00:16:38.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.435 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:38.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:38.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:38.435 00:16:38.435 --- 10.0.0.2 ping statistics --- 00:16:38.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.435 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # return 0 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=76164 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 76164 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 76164 ']' 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.435 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.694 [2024-10-15 08:27:40.209801] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
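(With networking up, the discovery harness is two SPDK processes: the nvmf_tgt just started inside nvmf_tgt_ns_spdk with core mask 0x2 (pid 76164 below) acts as the target, and a second nvmf_tgt on /tmp/host.sock acts as the host. The rpc_cmd calls traced next give the target a TCP transport, a discovery listener on 10.0.0.3:8009 and two null bdevs; written here as plain rpc.py calls, this is only a sketch of that sequence:)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    # host side: a second nvmf_tgt instance with its own RPC socket (pid 76183 below)
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock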
00:16:38.694 [2024-10-15 08:27:40.209885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.694 [2024-10-15 08:27:40.348561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.953 [2024-10-15 08:27:40.438612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.953 [2024-10-15 08:27:40.438681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.953 [2024-10-15 08:27:40.438696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.953 [2024-10-15 08:27:40.438707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.953 [2024-10-15 08:27:40.438716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:38.953 [2024-10-15 08:27:40.439341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.953 [2024-10-15 08:27:40.517412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 [2024-10-15 08:27:40.645770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 [2024-10-15 08:27:40.653963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.953 08:27:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 null0 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 null1 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76183 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:38.953 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76183 /tmp/host.sock 00:16:39.211 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 76183 ']' 00:16:39.211 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:39.211 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:39.211 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:39.212 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:39.212 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:39.212 08:27:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.212 [2024-10-15 08:27:40.752711] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
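From here on two SPDK app instances are in play: the target started earlier (pid 76164) answers RPCs on the default /var/tmp/spdk.sock, while this second nvmf_tgt (pid 76183, started with -m 0x1 -r /tmp/host.sock) acts purely as the NVMe-oF host side, so its bdev_nvme calls are addressed to /tmp/host.sock. rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py; a sketch of how the two RPC endpoints are driven, using the same calls that appear in this run:

  # target-side configuration (default socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  # host-side (initiator) calls go to the second instance's socket
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers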
00:16:39.212 [2024-10-15 08:27:40.752886] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76183 ] 00:16:39.212 [2024-10-15 08:27:40.904038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.469 [2024-10-15 08:27:40.992720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.469 [2024-10-15 08:27:41.070204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.469 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:39.470 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:39.470 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:39.470 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.470 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:39.470 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.470 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:39.470 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:39.470 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.727 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:39.727 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:39.727 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:39.727 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.728 08:27:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.728 08:27:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:39.728 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:39.986 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.987 [2024-10-15 08:27:41.542109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.987 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.245 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.246 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:16:40.246 08:27:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:16:40.504 [2024-10-15 08:27:42.179134] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:40.504 [2024-10-15 08:27:42.179181] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:40.504 [2024-10-15 08:27:42.179203] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:40.504 
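The bdev_nvme messages above show the discovery path reacting to the configuration just made: the discovery controller at 10.0.0.3:8009 (attached when bdev_nvme_start_discovery was issued earlier) now sees nqn.2016-06.io.spdk:cnode0 on port 4420 in the discovery log page, the host attaches it as controller nvme0, and the null0 namespace surfaces as bdev nvme0n1, which is what the surrounding waitforcondition checks poll for. The host-side call that drives this, as issued earlier in this run, is:

  # follow the discovery service and auto-attach whatever subsystems it advertises
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # once cnode0 is advertised and attached, its namespace shows up as a bdev
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # expect nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs              # expect nvme0n1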
[2024-10-15 08:27:42.185178] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:40.763 [2024-10-15 08:27:42.242879] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:40.763 [2024-10-15 08:27:42.242925] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
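The repeating local max=10 / (( max-- )) / eval / sleep 1 fragments in the trace come from the waitforcondition helper in common/autotest_common.sh, which the test leans on for every state change in this file. A minimal sketch of the pattern as it can be reconstructed from the trace (not a verbatim copy of the helper):

  # poll an arbitrary bash condition, giving up after ~10 attempts
  waitforcondition() {
      local cond=$1            # e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0         # condition holds
          fi
          sleep 1              # not yet, retry
      done
      return 1                 # timed out
  }

  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'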
00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:41.330 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.331 08:27:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.331 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:41.590 08:27:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.590 [2024-10-15 08:27:43.184277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:41.590 [2024-10-15 08:27:43.185482] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:41.590 [2024-10-15 08:27:43.185527] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:41.590 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:41.591 [2024-10-15 08:27:43.191443] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:41.591 08:27:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:41.591 [2024-10-15 08:27:43.253133] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:41.591 [2024-10-15 08:27:43.253165] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:41.591 [2024-10-15 08:27:43.253174] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.591 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.850 [2024-10-15 08:27:43.429384] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:41.850 [2024-10-15 08:27:43.429427] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:41.850 [2024-10-15 08:27:43.429735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.850 [2024-10-15 08:27:43.429773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.850 [2024-10-15 08:27:43.429804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.850 [2024-10-15 08:27:43.429814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.850 [2024-10-15 08:27:43.429825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.850 [2024-10-15 08:27:43.429834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.850 [2024-10-15 08:27:43.429845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.850 [2024-10-15 08:27:43.429855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.850 [2024-10-15 08:27:43.429865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aca950 is same with the state(6) to be set 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:41.850 [2024-10-15 08:27:43.435369] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io. 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:41.850 spdk:cnode0:10.0.0.3:4420 not found 00:16:41.850 [2024-10-15 08:27:43.435547] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:41.850 [2024-10-15 08:27:43.435634] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aca950 (9): Bad file descriptor 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:41.850 08:27:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:41.850 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:42.109 08:27:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:42.109 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.368 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:42.368 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:42.368 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:42.368 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:42.368 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:42.368 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.368 08:27:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.306 [2024-10-15 08:27:44.869914] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:43.306 [2024-10-15 08:27:44.869960] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:43.306 [2024-10-15 08:27:44.869980] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:43.306 [2024-10-15 08:27:44.875947] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:43.306 [2024-10-15 08:27:44.937879] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:43.306 [2024-10-15 08:27:44.938175] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:16:43.306 request: 00:16:43.306 { 00:16:43.306 "name": "nvme", 00:16:43.306 "trtype": "tcp", 00:16:43.306 "traddr": "10.0.0.3", 00:16:43.306 "adrfam": "ipv4", 00:16:43.306 "trsvcid": "8009", 00:16:43.306 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:43.306 "wait_for_attach": true, 00:16:43.306 "method": "bdev_nvme_start_discovery", 00:16:43.306 "req_id": 1 00:16:43.306 } 00:16:43.306 Got JSON-RPC error response 00:16:43.306 response: 00:16:43.306 { 00:16:43.306 "code": -17, 00:16:43.306 "message": "File exists" 00:16:43.306 } 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:43.306 08:27:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.306 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:43.306 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:43.306 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:43.306 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.306 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.306 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.306 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:43.306 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.565 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.565 request: 00:16:43.565 { 00:16:43.565 "name": "nvme_second", 00:16:43.565 "trtype": "tcp", 00:16:43.565 "traddr": "10.0.0.3", 00:16:43.565 "adrfam": "ipv4", 00:16:43.565 "trsvcid": "8009", 00:16:43.565 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:43.565 "wait_for_attach": true, 00:16:43.566 "method": "bdev_nvme_start_discovery", 00:16:43.566 "req_id": 1 00:16:43.566 } 00:16:43.566 Got JSON-RPC error response 00:16:43.566 response: 00:16:43.566 { 00:16:43.566 "code": -17, 00:16:43.566 "message": "File exists" 00:16:43.566 } 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:43.566 08:27:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.566 08:27:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.559 [2024-10-15 08:27:46.238766] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:44.559 [2024-10-15 08:27:46.238848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b398f0 with addr=10.0.0.3, port=8010 00:16:44.559 [2024-10-15 08:27:46.238878] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:44.559 [2024-10-15 08:27:46.238890] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:44.559 [2024-10-15 08:27:46.238900] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:45.937 [2024-10-15 08:27:47.238772] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:45.937 [2024-10-15 08:27:47.238838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b398f0 with addr=10.0.0.3, port=8010 00:16:45.937 [2024-10-15 08:27:47.238888] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:45.937 [2024-10-15 08:27:47.238900] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:45.937 [2024-10-15 08:27:47.238911] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:46.872 [2024-10-15 08:27:48.238600] 
bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:46.872 request: 00:16:46.872 { 00:16:46.872 "name": "nvme_second", 00:16:46.872 "trtype": "tcp", 00:16:46.872 "traddr": "10.0.0.3", 00:16:46.872 "adrfam": "ipv4", 00:16:46.872 "trsvcid": "8010", 00:16:46.872 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:46.872 "wait_for_attach": false, 00:16:46.872 "attach_timeout_ms": 3000, 00:16:46.872 "method": "bdev_nvme_start_discovery", 00:16:46.872 "req_id": 1 00:16:46.872 } 00:16:46.872 Got JSON-RPC error response 00:16:46.872 response: 00:16:46.872 { 00:16:46.872 "code": -110, 00:16:46.872 "message": "Connection timed out" 00:16:46.872 } 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76183 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:46.872 rmmod nvme_tcp 00:16:46.872 rmmod nvme_fabrics 00:16:46.872 rmmod nvme_keyring 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:46.872 08:27:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 76164 ']' 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 76164 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 76164 ']' 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 76164 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76164 00:16:46.872 killing process with pid 76164 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76164' 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 76164 00:16:46.872 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 76164 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:47.131 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:16:47.390 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:47.390 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:47.390 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.390 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.390 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:47.390 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.390 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.390 08:27:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.390 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:47.390 ************************************ 00:16:47.390 00:16:47.390 real 0m9.483s 00:16:47.391 user 0m17.924s 00:16:47.391 sys 0m2.143s 00:16:47.391 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.391 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.391 END TEST nvmf_host_discovery 00:16:47.391 ************************************ 00:16:47.391 08:27:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:47.391 08:27:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:47.391 08:27:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.391 08:27:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.391 ************************************ 00:16:47.391 START TEST nvmf_host_multipath_status 00:16:47.391 ************************************ 00:16:47.391 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:47.651 * Looking for test storage... 
00:16:47.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.651 --rc genhtml_branch_coverage=1 00:16:47.651 --rc genhtml_function_coverage=1 00:16:47.651 --rc genhtml_legend=1 00:16:47.651 --rc geninfo_all_blocks=1 00:16:47.651 --rc geninfo_unexecuted_blocks=1 00:16:47.651 00:16:47.651 ' 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.651 --rc genhtml_branch_coverage=1 00:16:47.651 --rc genhtml_function_coverage=1 00:16:47.651 --rc genhtml_legend=1 00:16:47.651 --rc geninfo_all_blocks=1 00:16:47.651 --rc geninfo_unexecuted_blocks=1 00:16:47.651 00:16:47.651 ' 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.651 --rc genhtml_branch_coverage=1 00:16:47.651 --rc genhtml_function_coverage=1 00:16:47.651 --rc genhtml_legend=1 00:16:47.651 --rc geninfo_all_blocks=1 00:16:47.651 --rc geninfo_unexecuted_blocks=1 00:16:47.651 00:16:47.651 ' 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:47.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.651 --rc genhtml_branch_coverage=1 00:16:47.651 --rc genhtml_function_coverage=1 00:16:47.651 --rc genhtml_legend=1 00:16:47.651 --rc geninfo_all_blocks=1 00:16:47.651 --rc geninfo_unexecuted_blocks=1 00:16:47.651 00:16:47.651 ' 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.651 08:27:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.651 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:47.652 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # nvmf_veth_init 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:47.652 Cannot find device "nvmf_init_br" 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:47.652 Cannot find device "nvmf_init_br2" 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:47.652 Cannot find device "nvmf_tgt_br" 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.652 Cannot find device "nvmf_tgt_br2" 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:47.652 Cannot find device "nvmf_init_br" 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:47.652 Cannot find device "nvmf_init_br2" 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:47.652 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:47.912 Cannot find device "nvmf_tgt_br" 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:47.912 Cannot find device "nvmf_tgt_br2" 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:47.912 Cannot find device "nvmf_br" 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:47.912 Cannot find device "nvmf_init_if" 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:47.912 Cannot find device "nvmf_init_if2" 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:47.912 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:48.233 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:48.233 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:16:48.233 00:16:48.233 --- 10.0.0.3 ping statistics --- 00:16:48.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.233 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:48.233 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:48.233 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:48.233 00:16:48.233 --- 10.0.0.4 ping statistics --- 00:16:48.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.233 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:48.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:48.233 00:16:48.233 --- 10.0.0.1 ping statistics --- 00:16:48.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.233 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:48.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:48.233 00:16:48.233 --- 10.0.0.2 ping statistics --- 00:16:48.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.233 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # return 0 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=76690 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 76690 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76690 ']' 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.233 08:27:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:48.233 [2024-10-15 08:27:49.771678] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:16:48.233 [2024-10-15 08:27:49.771787] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.233 [2024-10-15 08:27:49.916023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.505 [2024-10-15 08:27:49.997178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.505 [2024-10-15 08:27:49.997513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.505 [2024-10-15 08:27:49.997712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.505 [2024-10-15 08:27:49.997844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.505 [2024-10-15 08:27:49.997879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.505 [2024-10-15 08:27:49.999552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.505 [2024-10-15 08:27:49.999561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.505 [2024-10-15 08:27:50.078503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.440 08:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.440 08:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:49.440 08:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:49.440 08:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:49.440 08:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.440 08:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.440 08:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76690 00:16:49.440 08:27:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.440 [2024-10-15 08:27:51.136333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.440 08:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:50.008 Malloc0 00:16:50.008 08:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:50.270 08:27:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:50.529 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:50.529 [2024-10-15 08:27:52.253360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:50.789 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:51.048 [2024-10-15 08:27:52.541488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:51.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76746 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76746 /var/tmp/bdevperf.sock 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76746 ']' 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
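[Annotation] The RPCs traced in this stretch build the multipath topology: a TCP transport, a Malloc0 bdev, and subsystem nqn.2016-06.io.spdk:cnode1 exported on two listeners (10.0.0.3:4420 and 10.0.0.3:4421) so the host sees two ANA-managed paths to the same namespace. A condensed sketch of those calls, reusing the names, addresses and options exactly as they appear in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192        # transport options copied from the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B block size
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
    # Two listeners on the same subsystem give the initiator two paths to Malloc0.
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421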
00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.048 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:51.310 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.310 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:51.310 08:27:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:51.568 08:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:52.135 Nvme0n1 00:16:52.136 08:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:52.394 Nvme0n1 00:16:52.394 08:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:52.394 08:27:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:54.296 08:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:54.296 08:27:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:54.555 08:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:55.122 08:27:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:56.185 08:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:56.185 08:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:56.185 08:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.185 08:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:56.185 08:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.185 08:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:56.185 08:27:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:56.185 08:27:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.752 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:56.752 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:56.752 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:56.752 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:56.752 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:56.752 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:56.752 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:56.752 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.011 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.011 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:57.011 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.011 08:27:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:57.578 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.578 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:57.578 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:57.578 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.838 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.838 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:57.838 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:58.096 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
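[Annotation] Each check_status pass above queries bdevperf's view of the paths with bdev_nvme_get_io_paths and filters the per-port current/connected/accessible flags with jq. A reconstruction of that port_status helper is sketched below; the RPC socket, command and jq filter match the trace, while the function body itself is inferred.

    BDEVPERF_RPC_SOCK=/var/tmp/bdevperf.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # port_status <trsvcid> <field> <expected> : compare one field of one io_path.
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$($RPC -s "$BDEVPERF_RPC_SOCK" bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

    # As in the first scenario above (both listeners optimized, active_passive policy):
    port_status 4420 current true
    port_status 4421 current false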
00:16:58.354 08:27:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:59.287 08:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:59.287 08:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:59.287 08:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.287 08:28:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:59.544 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:59.544 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:59.544 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:59.544 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.802 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:59.802 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:59.802 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:59.802 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:00.367 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.367 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:00.367 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.367 08:28:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:00.625 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.625 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:00.625 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:00.625 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.883 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:00.883 08:28:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:00.883 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.883 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:01.141 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.141 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:01.141 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:01.398 08:28:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:01.657 08:28:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:02.593 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:02.593 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:02.593 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.593 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:02.852 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.852 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:02.852 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:02.852 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:03.111 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:03.111 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:03.111 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.111 08:28:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:03.369 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.369 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:03.369 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.369 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:03.934 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:03.934 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:03.934 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:03.934 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:04.196 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.196 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:04.196 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.196 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:04.458 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.458 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:04.458 08:28:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:04.722 08:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:04.984 08:28:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:05.920 08:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:05.920 08:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:05.920 08:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.920 08:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:06.178 08:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:06.178 08:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:17:06.178 08:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:06.178 08:28:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:06.743 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:06.743 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:06.743 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:06.743 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.001 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:07.001 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:07.001 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.001 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:07.259 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:07.259 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:07.259 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:07.259 08:28:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.518 08:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:07.518 08:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:07.518 08:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.518 08:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:07.776 08:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:07.776 08:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:07.776 08:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:08.034 08:28:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:08.293 08:28:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:09.301 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:09.301 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:09.301 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.301 08:28:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:09.560 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:09.560 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:09.560 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:09.560 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:09.817 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:09.817 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:10.076 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:10.076 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:10.076 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:10.076 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:10.076 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:10.076 08:28:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:10.333 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:10.333 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:10.333 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:10.333 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] 
| select (.transport.trsvcid=="4420").accessible' 00:17:10.591 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:10.591 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:10.591 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:10.591 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.157 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:11.157 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:11.157 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:11.414 08:28:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:11.672 08:28:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:12.607 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:12.607 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:12.607 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.607 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:12.866 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:12.866 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:12.866 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.866 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:13.125 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.125 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:13.125 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.125 08:28:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
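[Annotation] Every scenario in this run flips the ANA state of the two listeners on the target side and then re-checks the host's io_paths after a one-second settle. A sketch of that set_ANA_state helper, reusing the subsystem NQN, address and RPC from the trace (the wrapper itself is inferred):

    # set_ANA_state <state for port 4420> <state for port 4421>
    set_ANA_state() {
        local RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local NQN=nqn.2016-06.io.spdk:cnode1
        $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $RPC nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # e.g. the earlier scenario where both paths become unusable:
    set_ANA_state inaccessible inaccessible
    sleep 1
    # expected: check_status false false true true false false (as traced above)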
00:17:13.384 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.384 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:13.384 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.384 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:13.643 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:13.643 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:13.643 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.643 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:14.209 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:14.209 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:14.209 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.209 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:14.209 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.210 08:28:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:14.777 08:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:14.777 08:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:14.777 08:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:15.036 08:28:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:16.420 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:16.420 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:16.420 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
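[Annotation] At multipath_status.sh@116 the trace switches the bdevperf controller to the active_active policy and sets both listeners optimized; from this point the test expects both paths to report current==true, which the following port_status calls confirm. A small hedged check of the same condition (the count-based jq filter is my construction, not part of the script):

    # With active_active and both listeners optimized, both io_paths should be 'current'.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq '[.poll_groups[].io_paths[] | select(.current == true)] | length'   # 2 in this run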
00:17:16.420 08:28:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:16.420 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.420 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:16.420 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:16.420 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.678 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.678 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:16.678 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.678 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:16.936 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:16.936 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:16.936 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:16.936 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:17.195 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.195 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:17.195 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.195 08:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:17.453 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.453 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:17.453 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:17.711 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:17.711 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:17.711 
08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:17.711 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:18.278 08:28:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:18.278 08:28:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:19.653 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:19.653 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:19.653 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.653 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:19.653 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:19.653 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:19.653 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.653 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:19.911 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:19.911 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:19.911 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.911 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:20.478 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.478 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:20.478 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.478 08:28:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:20.736 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.736 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:20.736 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:20.736 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:20.994 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:20.994 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:20.994 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:20.995 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.253 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.253 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:21.253 08:28:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:21.538 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:21.796 08:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:23.172 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:23.172 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:23.172 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.172 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:23.172 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:23.172 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:23.172 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.172 08:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:23.430 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:23.430 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:23.430 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:23.430 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.689 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:23.689 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:23.689 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:23.689 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:24.256 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.256 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:24.256 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.257 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:24.257 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.257 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:24.257 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.257 08:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:24.515 08:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.515 08:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:24.515 08:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:24.774 08:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:25.341 08:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:26.276 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:26.276 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:26.276 08:28:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.276 08:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:26.534 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:26.534 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:26.534 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.534 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:26.793 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:26.793 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:26.793 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:26.793 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:27.052 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.052 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:27.052 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:27.052 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.310 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.310 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:27.310 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.310 08:28:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:27.569 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.569 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:27.569 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:27.569 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76746 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76746 ']' 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76746 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76746 00:17:27.827 killing process with pid 76746 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76746' 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76746 00:17:27.827 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76746 00:17:27.827 { 00:17:27.827 "results": [ 00:17:27.827 { 00:17:27.827 "job": "Nvme0n1", 00:17:27.827 "core_mask": "0x4", 00:17:27.827 "workload": "verify", 00:17:27.827 "status": "terminated", 00:17:27.827 "verify_range": { 00:17:27.827 "start": 0, 00:17:27.827 "length": 16384 00:17:27.827 }, 00:17:27.827 "queue_depth": 128, 00:17:27.827 "io_size": 4096, 00:17:27.828 "runtime": 35.435661, 00:17:27.828 "iops": 8780.251058390022, 00:17:27.828 "mibps": 34.297855696836024, 00:17:27.828 "io_failed": 0, 00:17:27.828 "io_timeout": 0, 00:17:27.828 "avg_latency_us": 14546.397422496124, 00:17:27.828 "min_latency_us": 640.4654545454546, 00:17:27.828 "max_latency_us": 4026531.84 00:17:27.828 } 00:17:27.828 ], 00:17:27.828 "core_count": 1 00:17:27.828 } 00:17:28.093 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76746 00:17:28.093 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.093 [2024-10-15 08:27:52.613044] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:17:28.093 [2024-10-15 08:27:52.613196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76746 ] 00:17:28.093 [2024-10-15 08:27:52.748377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.093 [2024-10-15 08:27:52.824663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.093 [2024-10-15 08:27:52.899779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.093 Running I/O for 90 seconds... 
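[Annotation] When bdevperf pid 76746 is killed, it prints the JSON summary visible above (one "results" entry for the Nvme0n1 verify job, with iops, avg_latency_us and runtime, each line carrying the log-timestamp prefix). A hedged way to pull the headline numbers out of such a dump, assuming it has been captured without the prefixes into a hypothetical results.json:

    # Extract per-job IOPS and average latency from a captured bdevperf JSON summary.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json
    # -> "Nvme0n1: 8780.251058390022 IOPS, avg 14546.397422496124 us" for the run above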
00:17:28.093 8888.00 IOPS, 34.72 MiB/s [2024-10-15T08:28:29.824Z] 9124.00 IOPS, 35.64 MiB/s [2024-10-15T08:28:29.824Z] 9192.00 IOPS, 35.91 MiB/s [2024-10-15T08:28:29.824Z] 9218.00 IOPS, 36.01 MiB/s [2024-10-15T08:28:29.824Z] 9236.80 IOPS, 36.08 MiB/s [2024-10-15T08:28:29.824Z] 9242.17 IOPS, 36.10 MiB/s [2024-10-15T08:28:29.824Z] 9248.71 IOPS, 36.13 MiB/s [2024-10-15T08:28:29.824Z] 9251.62 IOPS, 36.14 MiB/s [2024-10-15T08:28:29.824Z] 9255.67 IOPS, 36.15 MiB/s [2024-10-15T08:28:29.824Z] 9273.20 IOPS, 36.22 MiB/s [2024-10-15T08:28:29.824Z] 9282.55 IOPS, 36.26 MiB/s [2024-10-15T08:28:29.824Z] 9292.00 IOPS, 36.30 MiB/s [2024-10-15T08:28:29.824Z] 9290.77 IOPS, 36.29 MiB/s [2024-10-15T08:28:29.824Z] 9288.86 IOPS, 36.28 MiB/s [2024-10-15T08:28:29.824Z] 9288.80 IOPS, 36.28 MiB/s [2024-10-15T08:28:29.824Z] [2024-10-15 08:28:09.590052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.590170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.590270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.590312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.590348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.093 [2024-10-15 08:28:09.590396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.093 [2024-10-15 08:28:09.590434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.093 [2024-10-15 08:28:09.590472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.093 [2024-10-15 08:28:09.590512] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.093 [2024-10-15 08:28:09.590613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.093 [2024-10-15 08:28:09.590655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.093 [2024-10-15 08:28:09.590692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.093 [2024-10-15 08:28:09.590727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.590764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.590800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.590838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.590859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.590874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.591180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.591208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.591236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105640 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.591254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.591278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.591294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.591316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.591331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.093 [2024-10-15 08:28:09.591353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.093 [2024-10-15 08:28:09.591368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:96 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.591731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.591769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.591807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.591850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.591902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.591940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.591962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.591977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 
08:28:09.592039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.094 [2024-10-15 08:28:09.592778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.592966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.592981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.094 [2024-10-15 08:28:09.593004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.094 [2024-10-15 08:28:09.593019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.593441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:87 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.593967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.593982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.594020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594043] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.594058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.095 [2024-10-15 08:28:09.594098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 
cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.095 [2024-10-15 08:28:09.594704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.095 [2024-10-15 08:28:09.594719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.594742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.594759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.594786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:09.594803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.594826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:09.594841] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.594864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:09.594894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.594918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:09.594934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.594961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:09.594976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.594999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:09.595014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:09.595052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:09.595090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 
08:28:09.595257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105544 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:09.595758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:09.595774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.096 9012.25 IOPS, 35.20 MiB/s [2024-10-15T08:28:29.827Z] 8482.12 IOPS, 33.13 MiB/s [2024-10-15T08:28:29.827Z] 8010.89 IOPS, 31.29 MiB/s [2024-10-15T08:28:29.827Z] 7589.26 IOPS, 29.65 MiB/s [2024-10-15T08:28:29.827Z] 7430.85 IOPS, 29.03 MiB/s [2024-10-15T08:28:29.827Z] 7523.19 IOPS, 29.39 MiB/s [2024-10-15T08:28:29.827Z] 7607.23 IOPS, 29.72 MiB/s [2024-10-15T08:28:29.827Z] 7770.13 IOPS, 30.35 MiB/s [2024-10-15T08:28:29.827Z] 7962.79 IOPS, 31.10 MiB/s [2024-10-15T08:28:29.827Z] 8144.48 IOPS, 31.81 MiB/s [2024-10-15T08:28:29.827Z] 8265.04 IOPS, 32.29 MiB/s [2024-10-15T08:28:29.827Z] 8299.37 IOPS, 32.42 MiB/s [2024-10-15T08:28:29.827Z] 8327.25 IOPS, 32.53 MiB/s [2024-10-15T08:28:29.827Z] 8351.00 IOPS, 32.62 MiB/s [2024-10-15T08:28:29.827Z] 8443.97 IOPS, 32.98 MiB/s [2024-10-15T08:28:29.827Z] 8577.58 IOPS, 33.51 MiB/s [2024-10-15T08:28:29.827Z] 8656.59 IOPS, 33.81 MiB/s [2024-10-15T08:28:29.827Z] [2024-10-15 08:28:26.775052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:26.775182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:26.775291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:26.775329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:26.775386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:26.775423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:26.775459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:26.775494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:26.775545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:26.775580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.096 [2024-10-15 08:28:26.775615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:26.775649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:26.775684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.096 [2024-10-15 08:28:26.775705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.096 [2024-10-15 08:28:26.775719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.775751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.775768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.775789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.097 [2024-10-15 08:28:26.775803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.775824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.775838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.775859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.775873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.775919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.775940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.775963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.775978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.775999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 
nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.097 [2024-10-15 08:28:26.776427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.776831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.776846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.778194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.778226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.778254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.778279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.778301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.778316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:17:28.097 [2024-10-15 08:28:26.778338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.778353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.778375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.778390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.778411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.778427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.097 [2024-10-15 08:28:26.778449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.097 [2024-10-15 08:28:26.778464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.778772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.778944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.778966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.778981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.779017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.779067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.779104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.779158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.779197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.779234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.779271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.779307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.779349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.779385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.779422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.779458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:28.098 [2024-10-15 08:28:26.779495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.779516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.779540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.781281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 
lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.781590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.781627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.781664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.098 [2024-10-15 08:28:26.781715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.098 [2024-10-15 08:28:26.781781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.098 [2024-10-15 08:28:26.781798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.781820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.781835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.781856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.781871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.781893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.781909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.781930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.781945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.781967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.781982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:17:28.099 [2024-10-15 08:28:26.782377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.782822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.782845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.782860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.784432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.784462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.784491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.784508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.784530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.784545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.784566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.784582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.784603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.784619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.784640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.784655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.784676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.784692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.784714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.784762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.785251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.785280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.785309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.785326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.785347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.785362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.785384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.099 [2024-10-15 08:28:26.785399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.785421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.785435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.099 [2024-10-15 08:28:26.785457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.099 [2024-10-15 08:28:26.785472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.785509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.785545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:28.100 [2024-10-15 08:28:26.785581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.785619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.785655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.785704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.785744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.785781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.785841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.785879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.785916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.785953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.785974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 
nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.785989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.786025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.786061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.786097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.786162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.786200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.786248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.786286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.786321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.786357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.786378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.786393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.787602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.787646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.787682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.787718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.787754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.787804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.787840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.787895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.787932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:17:28.100 [2024-10-15 08:28:26.787953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.787968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.787989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.788003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.788024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.788039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.788061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.788076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.788097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.788112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.788150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.788166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.788187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.788202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.788223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.100 [2024-10-15 08:28:26.788238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.100 [2024-10-15 08:28:26.789066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.100 [2024-10-15 08:28:26.789096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.789400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.789435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.789471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.789507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.789543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.789665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.789882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:28.101 [2024-10-15 08:28:26.789940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.789977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.789998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.790013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.790034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.790049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.790070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.790085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.792673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.792732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.792769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.792805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.792841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.792877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.792913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.792949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.792970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.792985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.793006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.101 [2024-10-15 08:28:26.793021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.793042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.101 [2024-10-15 08:28:26.793058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.101 [2024-10-15 08:28:26.793079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:28.102 [2024-10-15 08:28:26.793612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.793963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.793984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.793998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.794019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.794034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.794055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.794077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.794099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.794127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.794162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.794179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.794199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.794214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.794235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.794251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.796510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.796573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.796611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.796647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.796685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.796720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.796756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.102 [2024-10-15 08:28:26.796793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.796845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.102 [2024-10-15 08:28:26.796866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.102 [2024-10-15 08:28:26.796881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.103 [2024-10-15 08:28:26.796902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.103 [2024-10-15 08:28:26.796916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.103 [2024-10-15 08:28:26.796938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.103 [2024-10-15 08:28:26.796952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.103 [2024-10-15 08:28:26.796974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.103 [2024-10-15 08:28:26.796989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.103 8741.76 IOPS, 34.15 MiB/s [2024-10-15T08:28:29.834Z] 8757.59 IOPS, 34.21 MiB/s [2024-10-15T08:28:29.834Z] 8773.89 IOPS, 34.27 MiB/s [2024-10-15T08:28:29.834Z] Received shutdown signal, test time was about 35.436489 seconds 00:17:28.103 00:17:28.103 Latency(us) 00:17:28.103 [2024-10-15T08:28:29.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.103 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:28.103 Verification LBA range: start 0x0 length 0x4000 00:17:28.103 Nvme0n1 : 35.44 8780.25 34.30 0.00 0.00 14546.40 640.47 4026531.84 00:17:28.103 [2024-10-15T08:28:29.834Z] =================================================================================================================== 00:17:28.103 [2024-10-15T08:28:29.834Z] Total : 8780.25 34.30 0.00 0.00 14546.40 640.47 4026531.84 00:17:28.103 08:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.361 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:17:28.361 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.361 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:17:28.361 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:28.361 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:17:28.620 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:28.620 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:17:28.620 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:28.620 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:28.620 rmmod nvme_tcp 00:17:28.620 rmmod nvme_fabrics 00:17:28.620 rmmod nvme_keyring 00:17:28.620 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:28.620 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 76690 ']' 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 76690 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76690 ']' 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76690 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 76690 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:28.621 killing process with pid 76690 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76690' 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76690 00:17:28.621 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76690 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:28.880 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 
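[Editor's note] The teardown traced above runs autotest_common.sh's killprocess helper: confirm the PID is still alive, resolve its command name so a recycled PID is never killed by mistake, then kill and reap it. A minimal sketch of that pattern, simplified from the trace (the real helper also special-cases sudo wrappers and non-Linux hosts):

    killprocess() {
        local pid=$1
        # Nothing to do if the process already exited.
        kill -0 "$pid" 2>/dev/null || return 0
        # Resolve the command name so a recycled PID is never killed by mistake.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid ($process_name)"
        kill "$pid"
        # 'wait' only reaps children of this shell, which nvmf_tgt is here.
        wait "$pid" 2>/dev/null || true
    }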
00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:17:29.139 00:17:29.139 real 0m41.674s 00:17:29.139 user 2m14.154s 00:17:29.139 sys 0m12.568s 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:29.139 ************************************ 00:17:29.139 END TEST nvmf_host_multipath_status 00:17:29.139 ************************************ 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.139 ************************************ 00:17:29.139 START TEST nvmf_discovery_remove_ifc 00:17:29.139 ************************************ 00:17:29.139 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:29.399 * Looking for test storage... 
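[Editor's note] The START TEST / END TEST banners and the real/user/sys line above come from the run_test wrapper in autotest_common.sh. Roughly, it prints the banners, times the test script, and propagates its exit code; a rough sketch of that idea only, not the exact SPDK implementation:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        # 'time' produces the real/user/sys line seen in the log.
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }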
00:17:29.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:29.399 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:29.399 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:17:29.399 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:29.399 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:29.399 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.399 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:29.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.400 --rc genhtml_branch_coverage=1 00:17:29.400 --rc genhtml_function_coverage=1 00:17:29.400 --rc genhtml_legend=1 00:17:29.400 --rc geninfo_all_blocks=1 00:17:29.400 --rc geninfo_unexecuted_blocks=1 00:17:29.400 00:17:29.400 ' 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:29.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.400 --rc genhtml_branch_coverage=1 00:17:29.400 --rc genhtml_function_coverage=1 00:17:29.400 --rc genhtml_legend=1 00:17:29.400 --rc geninfo_all_blocks=1 00:17:29.400 --rc geninfo_unexecuted_blocks=1 00:17:29.400 00:17:29.400 ' 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:29.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.400 --rc genhtml_branch_coverage=1 00:17:29.400 --rc genhtml_function_coverage=1 00:17:29.400 --rc genhtml_legend=1 00:17:29.400 --rc geninfo_all_blocks=1 00:17:29.400 --rc geninfo_unexecuted_blocks=1 00:17:29.400 00:17:29.400 ' 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:29.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.400 --rc genhtml_branch_coverage=1 00:17:29.400 --rc genhtml_function_coverage=1 00:17:29.400 --rc genhtml_legend=1 00:17:29.400 --rc geninfo_all_blocks=1 00:17:29.400 --rc geninfo_unexecuted_blocks=1 00:17:29.400 00:17:29.400 ' 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:29.400 08:28:30 
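[Editor's note] The lcov check traced above is scripts/common.sh's field-by-field dotted-version comparison ("lt 1.15 2"). A condensed sketch of the same comparison, assuming purely numeric fields (the full helper also splits on '-' and ':'):

    # Return success (0) when $1 < $2, comparing dot-separated numeric fields.
    version_lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            # Missing fields count as 0, so 1.15 compares as 1.15.0 against 2.0.0.
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"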
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:17:29.400 08:28:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.400 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:29.400 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:29.401 08:28:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:29.401 Cannot find device "nvmf_init_br" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:29.401 Cannot find device "nvmf_init_br2" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:29.401 Cannot find device "nvmf_tgt_br" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:29.401 Cannot find device "nvmf_tgt_br2" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:29.401 Cannot find device "nvmf_init_br" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:29.401 Cannot find device "nvmf_init_br2" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:29.401 Cannot find device "nvmf_tgt_br" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:29.401 Cannot find device "nvmf_tgt_br2" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:29.401 Cannot find device "nvmf_br" 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:29.401 Cannot find device "nvmf_init_if" 00:17:29.401 08:28:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:17:29.401 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:29.661 Cannot find device "nvmf_init_if2" 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:29.661 08:28:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:29.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:29.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:29.661 00:17:29.661 --- 10.0.0.3 ping statistics --- 00:17:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.661 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:29.661 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:29.661 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:29.661 00:17:29.661 --- 10.0.0.4 ping statistics --- 00:17:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.661 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:29.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:29.661 00:17:29.661 --- 10.0.0.1 ping statistics --- 00:17:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.661 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:29.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:17:29.661 00:17:29.661 --- 10.0.0.2 ping statistics --- 00:17:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.661 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # return 0 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:29.661 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=77607 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 77607 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77607 ']' 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:29.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
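[Editor's note] nvmf_veth_init, traced above, builds the test topology: initiator interfaces on the host (10.0.0.1, 10.0.0.2), target interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all joined through the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and ping checks in both directions. A trimmed-down sketch of the same idea using a single veth pair instead of the bridge (interface names and addresses follow the log; needs root):

    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"
    # One end stays on the host (initiator side), the peer moves into the target namespace.
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if
    ip link set nvmf_tgt_if netns "$NS"

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    # Allow the NVMe/TCP listener port through the host firewall.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Sanity check: the host should reach the target address inside the namespace.
    ping -c 1 10.0.0.3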
00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:29.957 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:29.957 [2024-10-15 08:28:31.479222] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:17:29.957 [2024-10-15 08:28:31.479334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.957 [2024-10-15 08:28:31.622400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.216 [2024-10-15 08:28:31.702894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.216 [2024-10-15 08:28:31.703001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.216 [2024-10-15 08:28:31.703016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.216 [2024-10-15 08:28:31.703027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.216 [2024-10-15 08:28:31.703045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.216 [2024-10-15 08:28:31.703580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.216 [2024-10-15 08:28:31.781559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.216 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.216 [2024-10-15 08:28:31.921019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.216 [2024-10-15 08:28:31.929220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:30.216 null0 00:17:30.475 [2024-10-15 08:28:31.961016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77626 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77626 /tmp/host.sock 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 77626 ']' 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.475 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.475 08:28:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.475 [2024-10-15 08:28:32.046032] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:17:30.475 [2024-10-15 08:28:32.046211] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77626 ] 00:17:30.475 [2024-10-15 08:28:32.189533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.734 [2024-10-15 08:28:32.275206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.734 [2024-10-15 08:28:32.396685] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.734 08:28:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.110 [2024-10-15 08:28:33.474506] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:32.110 [2024-10-15 08:28:33.474600] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:32.110 [2024-10-15 08:28:33.474624] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:32.110 [2024-10-15 08:28:33.480548] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:32.110 [2024-10-15 08:28:33.538198] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:32.110 [2024-10-15 08:28:33.538316] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:32.110 [2024-10-15 08:28:33.538349] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:32.110 [2024-10-15 08:28:33.538368] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:32.110 [2024-10-15 08:28:33.538400] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:32.110 [2024-10-15 08:28:33.543166] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfc5400 was disconnected and freed. delete nvme_qpair. 
00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.110 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:32.111 08:28:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:33.046 08:28:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:33.046 08:28:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:34.422 08:28:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:35.360 08:28:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:36.298 08:28:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:36.298 08:28:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:37.234 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:37.234 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:37.234 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:37.234 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:37.234 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:37.234 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.234 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:37.234 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.492 [2024-10-15 08:28:38.965886] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:37.492 [2024-10-15 08:28:38.965959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.492 [2024-10-15 08:28:38.965976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.492 [2024-10-15 08:28:38.965991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.492 [2024-10-15 08:28:38.966001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.492 [2024-10-15 08:28:38.966011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.492 [2024-10-15 08:28:38.966021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.492 [2024-10-15 08:28:38.966031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.492 [2024-10-15 08:28:38.966040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.492 [2024-10-15 08:28:38.966051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:37.492 [2024-10-15 08:28:38.966060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:37.492 [2024-10-15 08:28:38.966070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf98f70 is same with the state(6) to be set 00:17:37.492 [2024-10-15 08:28:38.975881] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf98f70 (9): Bad file descriptor 00:17:37.492 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:37.492 08:28:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:37.492 [2024-10-15 08:28:38.985903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.423 08:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:38.423 08:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:38.423 08:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.423 08:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:38.423 08:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:38.423 08:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:38.423 08:28:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:38.423 [2024-10-15 08:28:40.031257] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:38.423 [2024-10-15 08:28:40.031374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf98f70 with addr=10.0.0.3, port=4420 00:17:38.423 [2024-10-15 08:28:40.031413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf98f70 is same with the state(6) to be set 00:17:38.423 [2024-10-15 08:28:40.031488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf98f70 (9): Bad file descriptor 00:17:38.423 [2024-10-15 08:28:40.032409] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:38.423 [2024-10-15 08:28:40.032507] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:38.423 [2024-10-15 08:28:40.032535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:38.423 [2024-10-15 08:28:40.032558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:38.423 [2024-10-15 08:28:40.032628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.423 [2024-10-15 08:28:40.032654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.423 08:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.423 08:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:38.423 08:28:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:39.355 [2024-10-15 08:28:41.032721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:17:39.355 [2024-10-15 08:28:41.032832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:39.355 [2024-10-15 08:28:41.032862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:39.355 [2024-10-15 08:28:41.032874] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:39.355 [2024-10-15 08:28:41.032901] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:39.355 [2024-10-15 08:28:41.032937] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:39.355 [2024-10-15 08:28:41.033002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.355 [2024-10-15 08:28:41.033019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.355 [2024-10-15 08:28:41.033035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.355 [2024-10-15 08:28:41.033045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.355 [2024-10-15 08:28:41.033055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.355 [2024-10-15 08:28:41.033064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.355 [2024-10-15 08:28:41.033075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.355 [2024-10-15 08:28:41.033084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.355 [2024-10-15 08:28:41.033095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.355 [2024-10-15 08:28:41.033104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.355 [2024-10-15 08:28:41.033113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:39.355 [2024-10-15 08:28:41.033656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf2dd70 (9): Bad file descriptor 00:17:39.355 [2024-10-15 08:28:41.034670] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:39.355 [2024-10-15 08:28:41.034696] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:39.355 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:39.355 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:39.355 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:39.355 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.355 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:39.355 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:39.355 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:39.355 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:39.613 08:28:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:40.545 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:40.545 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.546 08:28:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.546 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:40.546 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:40.546 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:40.546 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:40.546 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.546 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:40.546 08:28:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:41.480 [2024-10-15 08:28:43.039663] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:41.480 [2024-10-15 08:28:43.039700] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:41.480 [2024-10-15 08:28:43.039734] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:41.480 [2024-10-15 08:28:43.045700] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:41.480 [2024-10-15 08:28:43.102374] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:41.480 [2024-10-15 08:28:43.102443] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:41.480 [2024-10-15 08:28:43.102470] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:41.480 [2024-10-15 08:28:43.102488] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:41.480 [2024-10-15 08:28:43.102498] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:41.480 [2024-10-15 08:28:43.108158] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfd1c30 was disconnected and freed. delete nvme_qpair. 
00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77626 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77626 ']' 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77626 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77626 00:17:41.739 killing process with pid 77626 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77626' 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77626 00:17:41.739 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77626 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.997 rmmod nvme_tcp 00:17:41.997 rmmod nvme_fabrics 00:17:41.997 rmmod nvme_keyring 00:17:41.997 08:28:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 77607 ']' 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 77607 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 77607 ']' 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 77607 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.997 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77607 00:17:42.256 killing process with pid 77607 00:17:42.256 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:42.256 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:42.256 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77607' 00:17:42.256 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 77607 00:17:42.256 08:28:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 77607 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.524 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:42.782 00:17:42.782 real 0m13.463s 00:17:42.782 user 0m22.668s 00:17:42.782 sys 0m2.615s 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:42.782 ************************************ 00:17:42.782 END TEST nvmf_discovery_remove_ifc 00:17:42.782 ************************************ 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.782 ************************************ 00:17:42.782 START TEST nvmf_identify_kernel_target 00:17:42.782 ************************************ 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:42.782 * Looking for test storage... 
00:17:42.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:42.782 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:43.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.043 --rc genhtml_branch_coverage=1 00:17:43.043 --rc genhtml_function_coverage=1 00:17:43.043 --rc genhtml_legend=1 00:17:43.043 --rc geninfo_all_blocks=1 00:17:43.043 --rc geninfo_unexecuted_blocks=1 00:17:43.043 00:17:43.043 ' 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:43.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.043 --rc genhtml_branch_coverage=1 00:17:43.043 --rc genhtml_function_coverage=1 00:17:43.043 --rc genhtml_legend=1 00:17:43.043 --rc geninfo_all_blocks=1 00:17:43.043 --rc geninfo_unexecuted_blocks=1 00:17:43.043 00:17:43.043 ' 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:43.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.043 --rc genhtml_branch_coverage=1 00:17:43.043 --rc genhtml_function_coverage=1 00:17:43.043 --rc genhtml_legend=1 00:17:43.043 --rc geninfo_all_blocks=1 00:17:43.043 --rc geninfo_unexecuted_blocks=1 00:17:43.043 00:17:43.043 ' 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:43.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.043 --rc genhtml_branch_coverage=1 00:17:43.043 --rc genhtml_function_coverage=1 00:17:43.043 --rc genhtml_legend=1 00:17:43.043 --rc geninfo_all_blocks=1 00:17:43.043 --rc geninfo_unexecuted_blocks=1 00:17:43.043 00:17:43.043 ' 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.043 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.043 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:43.044 08:28:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:43.044 08:28:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:43.044 Cannot find device "nvmf_init_br" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:43.044 Cannot find device "nvmf_init_br2" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:43.044 Cannot find device "nvmf_tgt_br" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.044 Cannot find device "nvmf_tgt_br2" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:43.044 Cannot find device "nvmf_init_br" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:43.044 Cannot find device "nvmf_init_br2" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:43.044 Cannot find device "nvmf_tgt_br" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:43.044 Cannot find device "nvmf_tgt_br2" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:43.044 Cannot find device "nvmf_br" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:43.044 Cannot find device "nvmf_init_if" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:43.044 Cannot find device "nvmf_init_if2" 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.044 08:28:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:43.044 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:43.316 08:28:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:43.316 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:43.316 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:17:43.316 00:17:43.316 --- 10.0.0.3 ping statistics --- 00:17:43.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.316 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:43.316 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:43.316 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:17:43.316 00:17:43.316 --- 10.0.0.4 ping statistics --- 00:17:43.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.316 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:43.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:43.316 00:17:43.316 --- 10.0.0.1 ping statistics --- 00:17:43.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.316 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:43.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:43.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:43.316 00:17:43.316 --- 10.0.0.2 ping statistics --- 00:17:43.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.316 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # return 0 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:43.316 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:43.317 08:28:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:43.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:43.832 Waiting for block devices as requested 00:17:43.832 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:43.832 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:43.832 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:44.090 No valid GPT data, bailing 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:44.090 08:28:45 
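The loop that begins above (and continues through the next several entries) walks /sys/block/nvme*, skips zoned namespaces and anything that already carries a partition table, and keeps the last unused device to back the kernel target. A rough sketch of that selection, with helper structure simplified rather than copied from common.sh:

    nvme=""
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # skip zoned namespaces
        if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
            continue
        fi
        # skip devices that already have a partition table, i.e. are in use
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" || true)
        [[ -z $pt ]] && nvme="/dev/$dev"
    done
    # in this run the scan settles on /dev/nvme1n1
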
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:44.090 No valid GPT data, bailing 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:44.090 No valid GPT data, bailing 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:44.090 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:44.348 No valid GPT data, bailing 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -a 10.0.0.1 -t tcp -s 4420 00:17:44.348 00:17:44.348 Discovery Log Number of Records 2, Generation counter 2 00:17:44.348 =====Discovery Log Entry 0====== 00:17:44.348 trtype: tcp 00:17:44.348 adrfam: ipv4 00:17:44.348 subtype: current discovery subsystem 00:17:44.348 treq: not specified, sq flow control disable supported 00:17:44.348 portid: 1 00:17:44.348 trsvcid: 4420 00:17:44.348 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:44.348 traddr: 10.0.0.1 00:17:44.348 eflags: none 00:17:44.348 sectype: none 00:17:44.348 =====Discovery Log Entry 1====== 00:17:44.348 trtype: tcp 00:17:44.348 adrfam: ipv4 00:17:44.348 subtype: nvme subsystem 00:17:44.348 treq: not 
specified, sq flow control disable supported 00:17:44.348 portid: 1 00:17:44.348 trsvcid: 4420 00:17:44.348 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:44.348 traddr: 10.0.0.1 00:17:44.348 eflags: none 00:17:44.348 sectype: none 00:17:44.348 08:28:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:44.348 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:44.607 ===================================================== 00:17:44.607 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:44.607 ===================================================== 00:17:44.607 Controller Capabilities/Features 00:17:44.607 ================================ 00:17:44.607 Vendor ID: 0000 00:17:44.607 Subsystem Vendor ID: 0000 00:17:44.607 Serial Number: 9c9b87be3f66a4045425 00:17:44.607 Model Number: Linux 00:17:44.607 Firmware Version: 6.8.9-20 00:17:44.607 Recommended Arb Burst: 0 00:17:44.607 IEEE OUI Identifier: 00 00 00 00:17:44.607 Multi-path I/O 00:17:44.607 May have multiple subsystem ports: No 00:17:44.607 May have multiple controllers: No 00:17:44.607 Associated with SR-IOV VF: No 00:17:44.607 Max Data Transfer Size: Unlimited 00:17:44.607 Max Number of Namespaces: 0 00:17:44.607 Max Number of I/O Queues: 1024 00:17:44.607 NVMe Specification Version (VS): 1.3 00:17:44.607 NVMe Specification Version (Identify): 1.3 00:17:44.607 Maximum Queue Entries: 1024 00:17:44.607 Contiguous Queues Required: No 00:17:44.607 Arbitration Mechanisms Supported 00:17:44.607 Weighted Round Robin: Not Supported 00:17:44.607 Vendor Specific: Not Supported 00:17:44.607 Reset Timeout: 7500 ms 00:17:44.607 Doorbell Stride: 4 bytes 00:17:44.607 NVM Subsystem Reset: Not Supported 00:17:44.607 Command Sets Supported 00:17:44.607 NVM Command Set: Supported 00:17:44.607 Boot Partition: Not Supported 00:17:44.607 Memory Page Size Minimum: 4096 bytes 00:17:44.607 Memory Page Size Maximum: 4096 bytes 00:17:44.607 Persistent Memory Region: Not Supported 00:17:44.607 Optional Asynchronous Events Supported 00:17:44.607 Namespace Attribute Notices: Not Supported 00:17:44.608 Firmware Activation Notices: Not Supported 00:17:44.608 ANA Change Notices: Not Supported 00:17:44.608 PLE Aggregate Log Change Notices: Not Supported 00:17:44.608 LBA Status Info Alert Notices: Not Supported 00:17:44.608 EGE Aggregate Log Change Notices: Not Supported 00:17:44.608 Normal NVM Subsystem Shutdown event: Not Supported 00:17:44.608 Zone Descriptor Change Notices: Not Supported 00:17:44.608 Discovery Log Change Notices: Supported 00:17:44.608 Controller Attributes 00:17:44.608 128-bit Host Identifier: Not Supported 00:17:44.608 Non-Operational Permissive Mode: Not Supported 00:17:44.608 NVM Sets: Not Supported 00:17:44.608 Read Recovery Levels: Not Supported 00:17:44.608 Endurance Groups: Not Supported 00:17:44.608 Predictable Latency Mode: Not Supported 00:17:44.608 Traffic Based Keep ALive: Not Supported 00:17:44.608 Namespace Granularity: Not Supported 00:17:44.608 SQ Associations: Not Supported 00:17:44.608 UUID List: Not Supported 00:17:44.608 Multi-Domain Subsystem: Not Supported 00:17:44.608 Fixed Capacity Management: Not Supported 00:17:44.608 Variable Capacity Management: Not Supported 00:17:44.608 Delete Endurance Group: Not Supported 00:17:44.608 Delete NVM Set: Not Supported 00:17:44.608 Extended LBA Formats Supported: Not Supported 00:17:44.608 Flexible Data 
Placement Supported: Not Supported 00:17:44.608 00:17:44.608 Controller Memory Buffer Support 00:17:44.608 ================================ 00:17:44.608 Supported: No 00:17:44.608 00:17:44.608 Persistent Memory Region Support 00:17:44.608 ================================ 00:17:44.608 Supported: No 00:17:44.608 00:17:44.608 Admin Command Set Attributes 00:17:44.608 ============================ 00:17:44.608 Security Send/Receive: Not Supported 00:17:44.608 Format NVM: Not Supported 00:17:44.608 Firmware Activate/Download: Not Supported 00:17:44.608 Namespace Management: Not Supported 00:17:44.608 Device Self-Test: Not Supported 00:17:44.608 Directives: Not Supported 00:17:44.608 NVMe-MI: Not Supported 00:17:44.608 Virtualization Management: Not Supported 00:17:44.608 Doorbell Buffer Config: Not Supported 00:17:44.608 Get LBA Status Capability: Not Supported 00:17:44.608 Command & Feature Lockdown Capability: Not Supported 00:17:44.608 Abort Command Limit: 1 00:17:44.608 Async Event Request Limit: 1 00:17:44.608 Number of Firmware Slots: N/A 00:17:44.608 Firmware Slot 1 Read-Only: N/A 00:17:44.608 Firmware Activation Without Reset: N/A 00:17:44.608 Multiple Update Detection Support: N/A 00:17:44.608 Firmware Update Granularity: No Information Provided 00:17:44.608 Per-Namespace SMART Log: No 00:17:44.608 Asymmetric Namespace Access Log Page: Not Supported 00:17:44.608 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:44.608 Command Effects Log Page: Not Supported 00:17:44.608 Get Log Page Extended Data: Supported 00:17:44.608 Telemetry Log Pages: Not Supported 00:17:44.608 Persistent Event Log Pages: Not Supported 00:17:44.608 Supported Log Pages Log Page: May Support 00:17:44.608 Commands Supported & Effects Log Page: Not Supported 00:17:44.608 Feature Identifiers & Effects Log Page:May Support 00:17:44.608 NVMe-MI Commands & Effects Log Page: May Support 00:17:44.608 Data Area 4 for Telemetry Log: Not Supported 00:17:44.608 Error Log Page Entries Supported: 1 00:17:44.608 Keep Alive: Not Supported 00:17:44.608 00:17:44.608 NVM Command Set Attributes 00:17:44.608 ========================== 00:17:44.608 Submission Queue Entry Size 00:17:44.608 Max: 1 00:17:44.608 Min: 1 00:17:44.608 Completion Queue Entry Size 00:17:44.608 Max: 1 00:17:44.608 Min: 1 00:17:44.608 Number of Namespaces: 0 00:17:44.608 Compare Command: Not Supported 00:17:44.608 Write Uncorrectable Command: Not Supported 00:17:44.608 Dataset Management Command: Not Supported 00:17:44.608 Write Zeroes Command: Not Supported 00:17:44.608 Set Features Save Field: Not Supported 00:17:44.608 Reservations: Not Supported 00:17:44.608 Timestamp: Not Supported 00:17:44.608 Copy: Not Supported 00:17:44.608 Volatile Write Cache: Not Present 00:17:44.608 Atomic Write Unit (Normal): 1 00:17:44.608 Atomic Write Unit (PFail): 1 00:17:44.608 Atomic Compare & Write Unit: 1 00:17:44.608 Fused Compare & Write: Not Supported 00:17:44.608 Scatter-Gather List 00:17:44.608 SGL Command Set: Supported 00:17:44.608 SGL Keyed: Not Supported 00:17:44.608 SGL Bit Bucket Descriptor: Not Supported 00:17:44.608 SGL Metadata Pointer: Not Supported 00:17:44.608 Oversized SGL: Not Supported 00:17:44.608 SGL Metadata Address: Not Supported 00:17:44.608 SGL Offset: Supported 00:17:44.608 Transport SGL Data Block: Not Supported 00:17:44.608 Replay Protected Memory Block: Not Supported 00:17:44.608 00:17:44.608 Firmware Slot Information 00:17:44.608 ========================= 00:17:44.608 Active slot: 0 00:17:44.608 00:17:44.608 00:17:44.608 Error Log 
00:17:44.608 ========= 00:17:44.608 00:17:44.608 Active Namespaces 00:17:44.608 ================= 00:17:44.608 Discovery Log Page 00:17:44.608 ================== 00:17:44.608 Generation Counter: 2 00:17:44.608 Number of Records: 2 00:17:44.608 Record Format: 0 00:17:44.608 00:17:44.608 Discovery Log Entry 0 00:17:44.608 ---------------------- 00:17:44.608 Transport Type: 3 (TCP) 00:17:44.608 Address Family: 1 (IPv4) 00:17:44.608 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:44.608 Entry Flags: 00:17:44.608 Duplicate Returned Information: 0 00:17:44.608 Explicit Persistent Connection Support for Discovery: 0 00:17:44.608 Transport Requirements: 00:17:44.608 Secure Channel: Not Specified 00:17:44.608 Port ID: 1 (0x0001) 00:17:44.608 Controller ID: 65535 (0xffff) 00:17:44.608 Admin Max SQ Size: 32 00:17:44.608 Transport Service Identifier: 4420 00:17:44.608 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:44.608 Transport Address: 10.0.0.1 00:17:44.608 Discovery Log Entry 1 00:17:44.608 ---------------------- 00:17:44.608 Transport Type: 3 (TCP) 00:17:44.608 Address Family: 1 (IPv4) 00:17:44.608 Subsystem Type: 2 (NVM Subsystem) 00:17:44.608 Entry Flags: 00:17:44.608 Duplicate Returned Information: 0 00:17:44.608 Explicit Persistent Connection Support for Discovery: 0 00:17:44.608 Transport Requirements: 00:17:44.608 Secure Channel: Not Specified 00:17:44.608 Port ID: 1 (0x0001) 00:17:44.608 Controller ID: 65535 (0xffff) 00:17:44.608 Admin Max SQ Size: 32 00:17:44.608 Transport Service Identifier: 4420 00:17:44.608 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:44.608 Transport Address: 10.0.0.1 00:17:44.608 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:44.608 get_feature(0x01) failed 00:17:44.608 get_feature(0x02) failed 00:17:44.608 get_feature(0x04) failed 00:17:44.608 ===================================================== 00:17:44.608 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:44.608 ===================================================== 00:17:44.608 Controller Capabilities/Features 00:17:44.608 ================================ 00:17:44.608 Vendor ID: 0000 00:17:44.608 Subsystem Vendor ID: 0000 00:17:44.608 Serial Number: 68ce3cf5a3329b598bb2 00:17:44.608 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:44.608 Firmware Version: 6.8.9-20 00:17:44.608 Recommended Arb Burst: 6 00:17:44.608 IEEE OUI Identifier: 00 00 00 00:17:44.608 Multi-path I/O 00:17:44.608 May have multiple subsystem ports: Yes 00:17:44.608 May have multiple controllers: Yes 00:17:44.608 Associated with SR-IOV VF: No 00:17:44.608 Max Data Transfer Size: Unlimited 00:17:44.608 Max Number of Namespaces: 1024 00:17:44.608 Max Number of I/O Queues: 128 00:17:44.608 NVMe Specification Version (VS): 1.3 00:17:44.608 NVMe Specification Version (Identify): 1.3 00:17:44.608 Maximum Queue Entries: 1024 00:17:44.608 Contiguous Queues Required: No 00:17:44.608 Arbitration Mechanisms Supported 00:17:44.608 Weighted Round Robin: Not Supported 00:17:44.608 Vendor Specific: Not Supported 00:17:44.608 Reset Timeout: 7500 ms 00:17:44.608 Doorbell Stride: 4 bytes 00:17:44.608 NVM Subsystem Reset: Not Supported 00:17:44.608 Command Sets Supported 00:17:44.608 NVM Command Set: Supported 00:17:44.608 Boot Partition: Not Supported 00:17:44.608 Memory 
Page Size Minimum: 4096 bytes 00:17:44.608 Memory Page Size Maximum: 4096 bytes 00:17:44.608 Persistent Memory Region: Not Supported 00:17:44.608 Optional Asynchronous Events Supported 00:17:44.608 Namespace Attribute Notices: Supported 00:17:44.608 Firmware Activation Notices: Not Supported 00:17:44.608 ANA Change Notices: Supported 00:17:44.608 PLE Aggregate Log Change Notices: Not Supported 00:17:44.608 LBA Status Info Alert Notices: Not Supported 00:17:44.608 EGE Aggregate Log Change Notices: Not Supported 00:17:44.608 Normal NVM Subsystem Shutdown event: Not Supported 00:17:44.608 Zone Descriptor Change Notices: Not Supported 00:17:44.608 Discovery Log Change Notices: Not Supported 00:17:44.608 Controller Attributes 00:17:44.608 128-bit Host Identifier: Supported 00:17:44.608 Non-Operational Permissive Mode: Not Supported 00:17:44.608 NVM Sets: Not Supported 00:17:44.608 Read Recovery Levels: Not Supported 00:17:44.608 Endurance Groups: Not Supported 00:17:44.608 Predictable Latency Mode: Not Supported 00:17:44.609 Traffic Based Keep ALive: Supported 00:17:44.609 Namespace Granularity: Not Supported 00:17:44.609 SQ Associations: Not Supported 00:17:44.609 UUID List: Not Supported 00:17:44.609 Multi-Domain Subsystem: Not Supported 00:17:44.609 Fixed Capacity Management: Not Supported 00:17:44.609 Variable Capacity Management: Not Supported 00:17:44.609 Delete Endurance Group: Not Supported 00:17:44.609 Delete NVM Set: Not Supported 00:17:44.609 Extended LBA Formats Supported: Not Supported 00:17:44.609 Flexible Data Placement Supported: Not Supported 00:17:44.609 00:17:44.609 Controller Memory Buffer Support 00:17:44.609 ================================ 00:17:44.609 Supported: No 00:17:44.609 00:17:44.609 Persistent Memory Region Support 00:17:44.609 ================================ 00:17:44.609 Supported: No 00:17:44.609 00:17:44.609 Admin Command Set Attributes 00:17:44.609 ============================ 00:17:44.609 Security Send/Receive: Not Supported 00:17:44.609 Format NVM: Not Supported 00:17:44.609 Firmware Activate/Download: Not Supported 00:17:44.609 Namespace Management: Not Supported 00:17:44.609 Device Self-Test: Not Supported 00:17:44.609 Directives: Not Supported 00:17:44.609 NVMe-MI: Not Supported 00:17:44.609 Virtualization Management: Not Supported 00:17:44.609 Doorbell Buffer Config: Not Supported 00:17:44.609 Get LBA Status Capability: Not Supported 00:17:44.609 Command & Feature Lockdown Capability: Not Supported 00:17:44.609 Abort Command Limit: 4 00:17:44.609 Async Event Request Limit: 4 00:17:44.609 Number of Firmware Slots: N/A 00:17:44.609 Firmware Slot 1 Read-Only: N/A 00:17:44.609 Firmware Activation Without Reset: N/A 00:17:44.609 Multiple Update Detection Support: N/A 00:17:44.609 Firmware Update Granularity: No Information Provided 00:17:44.609 Per-Namespace SMART Log: Yes 00:17:44.609 Asymmetric Namespace Access Log Page: Supported 00:17:44.609 ANA Transition Time : 10 sec 00:17:44.609 00:17:44.609 Asymmetric Namespace Access Capabilities 00:17:44.609 ANA Optimized State : Supported 00:17:44.609 ANA Non-Optimized State : Supported 00:17:44.609 ANA Inaccessible State : Supported 00:17:44.609 ANA Persistent Loss State : Supported 00:17:44.609 ANA Change State : Supported 00:17:44.609 ANAGRPID is not changed : No 00:17:44.609 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:44.609 00:17:44.609 ANA Group Identifier Maximum : 128 00:17:44.609 Number of ANA Group Identifiers : 128 00:17:44.609 Max Number of Allowed Namespaces : 1024 00:17:44.609 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:44.609 Command Effects Log Page: Supported 00:17:44.609 Get Log Page Extended Data: Supported 00:17:44.609 Telemetry Log Pages: Not Supported 00:17:44.609 Persistent Event Log Pages: Not Supported 00:17:44.609 Supported Log Pages Log Page: May Support 00:17:44.609 Commands Supported & Effects Log Page: Not Supported 00:17:44.609 Feature Identifiers & Effects Log Page:May Support 00:17:44.609 NVMe-MI Commands & Effects Log Page: May Support 00:17:44.609 Data Area 4 for Telemetry Log: Not Supported 00:17:44.609 Error Log Page Entries Supported: 128 00:17:44.609 Keep Alive: Supported 00:17:44.609 Keep Alive Granularity: 1000 ms 00:17:44.609 00:17:44.609 NVM Command Set Attributes 00:17:44.609 ========================== 00:17:44.609 Submission Queue Entry Size 00:17:44.609 Max: 64 00:17:44.609 Min: 64 00:17:44.609 Completion Queue Entry Size 00:17:44.609 Max: 16 00:17:44.609 Min: 16 00:17:44.609 Number of Namespaces: 1024 00:17:44.609 Compare Command: Not Supported 00:17:44.609 Write Uncorrectable Command: Not Supported 00:17:44.609 Dataset Management Command: Supported 00:17:44.609 Write Zeroes Command: Supported 00:17:44.609 Set Features Save Field: Not Supported 00:17:44.609 Reservations: Not Supported 00:17:44.609 Timestamp: Not Supported 00:17:44.609 Copy: Not Supported 00:17:44.609 Volatile Write Cache: Present 00:17:44.609 Atomic Write Unit (Normal): 1 00:17:44.609 Atomic Write Unit (PFail): 1 00:17:44.609 Atomic Compare & Write Unit: 1 00:17:44.609 Fused Compare & Write: Not Supported 00:17:44.609 Scatter-Gather List 00:17:44.609 SGL Command Set: Supported 00:17:44.609 SGL Keyed: Not Supported 00:17:44.609 SGL Bit Bucket Descriptor: Not Supported 00:17:44.609 SGL Metadata Pointer: Not Supported 00:17:44.609 Oversized SGL: Not Supported 00:17:44.609 SGL Metadata Address: Not Supported 00:17:44.609 SGL Offset: Supported 00:17:44.609 Transport SGL Data Block: Not Supported 00:17:44.609 Replay Protected Memory Block: Not Supported 00:17:44.609 00:17:44.609 Firmware Slot Information 00:17:44.609 ========================= 00:17:44.609 Active slot: 0 00:17:44.609 00:17:44.609 Asymmetric Namespace Access 00:17:44.609 =========================== 00:17:44.609 Change Count : 0 00:17:44.609 Number of ANA Group Descriptors : 1 00:17:44.609 ANA Group Descriptor : 0 00:17:44.609 ANA Group ID : 1 00:17:44.609 Number of NSID Values : 1 00:17:44.609 Change Count : 0 00:17:44.609 ANA State : 1 00:17:44.609 Namespace Identifier : 1 00:17:44.609 00:17:44.609 Commands Supported and Effects 00:17:44.609 ============================== 00:17:44.609 Admin Commands 00:17:44.609 -------------- 00:17:44.609 Get Log Page (02h): Supported 00:17:44.609 Identify (06h): Supported 00:17:44.609 Abort (08h): Supported 00:17:44.609 Set Features (09h): Supported 00:17:44.609 Get Features (0Ah): Supported 00:17:44.609 Asynchronous Event Request (0Ch): Supported 00:17:44.609 Keep Alive (18h): Supported 00:17:44.609 I/O Commands 00:17:44.609 ------------ 00:17:44.609 Flush (00h): Supported 00:17:44.609 Write (01h): Supported LBA-Change 00:17:44.609 Read (02h): Supported 00:17:44.609 Write Zeroes (08h): Supported LBA-Change 00:17:44.609 Dataset Management (09h): Supported 00:17:44.609 00:17:44.609 Error Log 00:17:44.609 ========= 00:17:44.609 Entry: 0 00:17:44.609 Error Count: 0x3 00:17:44.609 Submission Queue Id: 0x0 00:17:44.609 Command Id: 0x5 00:17:44.609 Phase Bit: 0 00:17:44.609 Status Code: 0x2 00:17:44.609 Status Code Type: 0x0 00:17:44.609 Do Not Retry: 1 00:17:44.609 Error 
Location: 0x28 00:17:44.609 LBA: 0x0 00:17:44.609 Namespace: 0x0 00:17:44.609 Vendor Log Page: 0x0 00:17:44.609 ----------- 00:17:44.609 Entry: 1 00:17:44.609 Error Count: 0x2 00:17:44.609 Submission Queue Id: 0x0 00:17:44.609 Command Id: 0x5 00:17:44.609 Phase Bit: 0 00:17:44.609 Status Code: 0x2 00:17:44.609 Status Code Type: 0x0 00:17:44.609 Do Not Retry: 1 00:17:44.609 Error Location: 0x28 00:17:44.609 LBA: 0x0 00:17:44.609 Namespace: 0x0 00:17:44.609 Vendor Log Page: 0x0 00:17:44.609 ----------- 00:17:44.609 Entry: 2 00:17:44.609 Error Count: 0x1 00:17:44.609 Submission Queue Id: 0x0 00:17:44.609 Command Id: 0x4 00:17:44.609 Phase Bit: 0 00:17:44.609 Status Code: 0x2 00:17:44.609 Status Code Type: 0x0 00:17:44.609 Do Not Retry: 1 00:17:44.609 Error Location: 0x28 00:17:44.609 LBA: 0x0 00:17:44.609 Namespace: 0x0 00:17:44.609 Vendor Log Page: 0x0 00:17:44.609 00:17:44.609 Number of Queues 00:17:44.609 ================ 00:17:44.609 Number of I/O Submission Queues: 128 00:17:44.609 Number of I/O Completion Queues: 128 00:17:44.609 00:17:44.609 ZNS Specific Controller Data 00:17:44.609 ============================ 00:17:44.609 Zone Append Size Limit: 0 00:17:44.609 00:17:44.609 00:17:44.609 Active Namespaces 00:17:44.609 ================= 00:17:44.609 get_feature(0x05) failed 00:17:44.609 Namespace ID:1 00:17:44.609 Command Set Identifier: NVM (00h) 00:17:44.609 Deallocate: Supported 00:17:44.609 Deallocated/Unwritten Error: Not Supported 00:17:44.609 Deallocated Read Value: Unknown 00:17:44.609 Deallocate in Write Zeroes: Not Supported 00:17:44.609 Deallocated Guard Field: 0xFFFF 00:17:44.609 Flush: Supported 00:17:44.609 Reservation: Not Supported 00:17:44.609 Namespace Sharing Capabilities: Multiple Controllers 00:17:44.609 Size (in LBAs): 1310720 (5GiB) 00:17:44.609 Capacity (in LBAs): 1310720 (5GiB) 00:17:44.609 Utilization (in LBAs): 1310720 (5GiB) 00:17:44.609 UUID: 75c6d631-1c0a-47e2-9789-edfe1665d175 00:17:44.609 Thin Provisioning: Not Supported 00:17:44.609 Per-NS Atomic Units: Yes 00:17:44.609 Atomic Boundary Size (Normal): 0 00:17:44.609 Atomic Boundary Size (PFail): 0 00:17:44.609 Atomic Boundary Offset: 0 00:17:44.609 NGUID/EUI64 Never Reused: No 00:17:44.609 ANA group ID: 1 00:17:44.609 Namespace Write Protected: No 00:17:44.609 Number of LBA Formats: 1 00:17:44.609 Current LBA Format: LBA Format #00 00:17:44.609 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:44.609 00:17:44.609 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:44.609 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:44.610 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:44.868 rmmod nvme_tcp 00:17:44.868 rmmod nvme_fabrics 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:44.868 08:28:46 
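With the identify output captured, nvmftestfini unwinds the setup: the nvme-tcp and nvme-fabrics modules are removed above, and the firewall rules and veth topology go next. The port-4420 ACCEPT rules were added earlier through an ipts wrapper that tags every rule with an "SPDK_NVMF:" comment, which is what lets the iptr step below drop exactly those rules in one pass. The idiom boils down to:

    # add rules tagged with a comment (what the ipts wrapper expands to)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # later, remove every tagged rule at once (what the iptr cleanup does)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
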
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.868 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:17:45.126 08:28:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:45.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.950 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:45.950 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:45.950 ************************************ 00:17:45.950 END TEST nvmf_identify_kernel_target 00:17:45.950 ************************************ 00:17:45.950 00:17:45.950 real 0m3.251s 00:17:45.950 user 0m1.179s 00:17:45.950 sys 0m1.460s 00:17:45.950 08:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.950 08:28:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.950 08:28:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:45.950 08:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:45.950 08:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.950 08:28:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.950 ************************************ 00:17:45.950 START TEST nvmf_auth_host 00:17:45.950 ************************************ 00:17:45.950 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:46.209 * Looking for test storage... 
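For reference, the kernel NVMe-oF target exercised by the identify test is driven entirely through nvmet configfs: configure_kernel_target created it earlier and clean_kernel_target has just removed it above. A minimal sketch of the create/teardown pair, with the NQN, address, and backing device taken from this run; the destinations of a couple of the bare echo lines in the trace are not shown there, so attr_model and attr_allow_any_host below are assumptions:

    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    # create: one subsystem with a namespace backed by the free /dev/nvme1n1, one TCP port
    modprobe nvmet
    mkdir -p "$sub/namespaces/1" "$port"
    echo "SPDK-$nqn"  > "$sub/attr_model"                 # assumed target of the traced echo
    echo 1            > "$sub/attr_allow_any_host"        # assumed target of the traced echo
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"

    # teardown: unlink from the port, disable the namespace, remove the directories
    rm -f "$port/subsystems/$nqn"
    echo 0 > "$sub/namespaces/1/enable"
    rmdir "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet
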
00:17:46.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:46.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.209 --rc genhtml_branch_coverage=1 00:17:46.209 --rc genhtml_function_coverage=1 00:17:46.209 --rc genhtml_legend=1 00:17:46.209 --rc geninfo_all_blocks=1 00:17:46.209 --rc geninfo_unexecuted_blocks=1 00:17:46.209 00:17:46.209 ' 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:46.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.209 --rc genhtml_branch_coverage=1 00:17:46.209 --rc genhtml_function_coverage=1 00:17:46.209 --rc genhtml_legend=1 00:17:46.209 --rc geninfo_all_blocks=1 00:17:46.209 --rc geninfo_unexecuted_blocks=1 00:17:46.209 00:17:46.209 ' 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:46.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.209 --rc genhtml_branch_coverage=1 00:17:46.209 --rc genhtml_function_coverage=1 00:17:46.209 --rc genhtml_legend=1 00:17:46.209 --rc geninfo_all_blocks=1 00:17:46.209 --rc geninfo_unexecuted_blocks=1 00:17:46.209 00:17:46.209 ' 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:46.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.209 --rc genhtml_branch_coverage=1 00:17:46.209 --rc genhtml_function_coverage=1 00:17:46.209 --rc genhtml_legend=1 00:17:46.209 --rc geninfo_all_blocks=1 00:17:46.209 --rc geninfo_unexecuted_blocks=1 00:17:46.209 00:17:46.209 ' 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.209 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:46.210 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # nvmf_veth_init 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:46.210 Cannot find device "nvmf_init_br" 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:46.210 Cannot find device "nvmf_init_br2" 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:46.210 Cannot find device "nvmf_tgt_br" 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.210 Cannot find device "nvmf_tgt_br2" 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:46.210 Cannot find device "nvmf_init_br" 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:46.210 Cannot find device "nvmf_init_br2" 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:46.210 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:46.467 Cannot find device "nvmf_tgt_br" 00:17:46.467 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:46.467 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:46.467 Cannot find device "nvmf_tgt_br2" 00:17:46.467 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:46.468 Cannot find device "nvmf_br" 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:46.468 Cannot find device "nvmf_init_if" 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:46.468 Cannot find device "nvmf_init_if2" 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.468 08:28:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.468 08:28:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
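Taken together, the nvmf_veth_init trace above builds the following test topology: two initiator-side interfaces (10.0.0.1 and 10.0.0.2) stay in the root namespace, two target-side interfaces (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and all four root-namespace veth peers are enslaved to the nvmf_br bridge. A condensed sketch of the same steps, using only the names and addresses printed by the trace:

    # namespace for the target side, plus four veth pairs (the *_br peer stays in the root ns)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addresses: initiator side in the root ns, target side inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the root-ns peers together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done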
00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:46.468 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:46.726 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.726 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:17:46.726 00:17:46.726 --- 10.0.0.3 ping statistics --- 00:17:46.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.726 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:46.726 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:46.726 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:17:46.726 00:17:46.726 --- 10.0.0.4 ping statistics --- 00:17:46.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.726 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:46.726 00:17:46.726 --- 10.0.0.1 ping statistics --- 00:17:46.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.726 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:46.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:46.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:46.726 00:17:46.726 --- 10.0.0.2 ping statistics --- 00:17:46.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.726 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # return 0 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=78618 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 78618 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78618 ']' 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
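With the links bridged, the harness opens TCP port 4420 toward both initiator interfaces (tagging each rule with an SPDK_NVMF comment, presumably so cleanup can later remove exactly these rules), confirms reachability in both directions with single pings, and only then launches the SPDK application inside the namespace with the nvme_auth debug log flag. A minimal restatement of that sequence, using the paths and flags shown above:

    # accept NVMe/TCP traffic arriving from the initiator veth interfaces
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # reachability check: root ns -> namespace, then namespace -> root ns
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

    # start the SPDK app in the target namespace with DH-HMAC-CHAP debug logging;
    # the harness records its PID (nvmfpid=78618 above) and waits on /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &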
00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.726 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.985 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.985 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:46.985 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:46.985 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:46.985 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=a960f09df83ed5377968435434b0edff 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.SFH 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key a960f09df83ed5377968435434b0edff 0 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 a960f09df83ed5377968435434b0edff 0 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=a960f09df83ed5377968435434b0edff 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.SFH 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.SFH 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.SFH 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.247 08:28:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=7cc8e1f80bd8e4a821e4bbd56d4d02cac3a99891d326a02c161f56b2d76f4084 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.7jM 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 7cc8e1f80bd8e4a821e4bbd56d4d02cac3a99891d326a02c161f56b2d76f4084 3 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 7cc8e1f80bd8e4a821e4bbd56d4d02cac3a99891d326a02c161f56b2d76f4084 3 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=7cc8e1f80bd8e4a821e4bbd56d4d02cac3a99891d326a02c161f56b2d76f4084 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.7jM 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.7jM 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7jM 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2a7e9c10cb0a07e6b7a1b8ab88ac7d64d8794c95f6ff55ca 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.cu7 00:17:47.247 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2a7e9c10cb0a07e6b7a1b8ab88ac7d64d8794c95f6ff55ca 0 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2a7e9c10cb0a07e6b7a1b8ab88ac7d64d8794c95f6ff55ca 0 
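Each gen_dhchap_key call above follows the same pattern: draw len/2 random bytes as a hex string with xxd, create a mode-0600 temp file named after the digest, and wrap the hex material into a DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64 payload>: (digest ids per the trace's map: null=0, sha256=1, sha384=2, sha512=3). The wrapping itself is done by an inline Python helper whose body xtrace does not print, so it is only summarized in a comment below; everything else is taken directly from the commands above.

    # gen_dhchap_key <digest> <len>  ->  path to a file holding a DHHC-1 secret
    digest=null; len=32
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # e.g. a960f09d... (32 hex chars)
    file=$(mktemp -t "spdk.key-${digest}.XXX")         # e.g. /tmp/spdk.key-null.SFH
    # the hidden Python step formats "$key" as 'DHHC-1:<digest id>:<base64 payload>:'
    # (id 00 for null here; 01/02/03 for sha256/384/512) and writes it into "$file"
    chmod 0600 "$file"
    echo "$file"                                       # the path is stored into keys[i] / ckeys[i]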
00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2a7e9c10cb0a07e6b7a1b8ab88ac7d64d8794c95f6ff55ca 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.cu7 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.cu7 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.cu7 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=33c14cd7daad24f584bfcae08774f510b6634acb8006c669 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.DPM 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 33c14cd7daad24f584bfcae08774f510b6634acb8006c669 2 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 33c14cd7daad24f584bfcae08774f510b6634acb8006c669 2 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=33c14cd7daad24f584bfcae08774f510b6634acb8006c669 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:17:47.248 08:28:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:47.517 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.DPM 00:17:47.517 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.DPM 00:17:47.517 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.DPM 00:17:47.517 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:47.517 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.517 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.518 08:28:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ef69f4f08f9a502254c8c3d8b2dd30ea 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.tDE 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ef69f4f08f9a502254c8c3d8b2dd30ea 1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ef69f4f08f9a502254c8c3d8b2dd30ea 1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ef69f4f08f9a502254c8c3d8b2dd30ea 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.tDE 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.tDE 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.tDE 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=70a088eb4aff12d488ed43add31a8be8 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.djF 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 70a088eb4aff12d488ed43add31a8be8 1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 70a088eb4aff12d488ed43add31a8be8 1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=70a088eb4aff12d488ed43add31a8be8 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.djF 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.djF 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.djF 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0607c8d260754ebe84e6c91cddb43ca9ee0390b44d163e26 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.6Je 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0607c8d260754ebe84e6c91cddb43ca9ee0390b44d163e26 2 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0607c8d260754ebe84e6c91cddb43ca9ee0390b44d163e26 2 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0607c8d260754ebe84e6c91cddb43ca9ee0390b44d163e26 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.6Je 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.6Je 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6Je 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:17:47.518 08:28:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4f23b14081d8c56613442a7c90c48b3e 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.O71 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4f23b14081d8c56613442a7c90c48b3e 0 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4f23b14081d8c56613442a7c90c48b3e 0 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4f23b14081d8c56613442a7c90c48b3e 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:17:47.518 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.O71 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.O71 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.O71 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=692e3f46391d3134c90da6ffa7c3065e02d29e8e9f512db37d732d6820316ecb 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.OnM 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 692e3f46391d3134c90da6ffa7c3065e02d29e8e9f512db37d732d6820316ecb 3 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 692e3f46391d3134c90da6ffa7c3065e02d29e8e9f512db37d732d6820316ecb 3 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=692e3f46391d3134c90da6ffa7c3065e02d29e8e9f512db37d732d6820316ecb 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.OnM 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.OnM 00:17:47.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.OnM 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78618 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 78618 ']' 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.777 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SFH 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7jM ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7jM 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cu7 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.DPM ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.DPM 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.tDE 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.djF ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.djF 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6Je 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.O71 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.O71 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.OnM 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:48.036 08:28:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:17:48.036 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:48.037 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:48.037 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:48.037 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:17:48.037 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:48.037 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:17:48.295 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:48.295 08:28:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:48.554 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:48.554 Waiting for block devices as requested 00:17:48.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:48.554 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:49.120 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:49.378 No valid GPT data, bailing 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:49.378 No valid GPT data, bailing 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:49.378 08:28:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:49.378 No valid GPT data, bailing 00:17:49.378 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:49.379 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:49.637 No valid GPT data, bailing 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme1n1 ]] 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -a 10.0.0.1 -t tcp -s 4420 00:17:49.637 00:17:49.637 Discovery Log Number of Records 2, Generation counter 2 00:17:49.637 =====Discovery Log Entry 0====== 00:17:49.637 trtype: tcp 00:17:49.637 adrfam: ipv4 00:17:49.637 subtype: current discovery subsystem 00:17:49.637 treq: not specified, sq flow control disable supported 00:17:49.637 portid: 1 00:17:49.637 trsvcid: 4420 00:17:49.637 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:49.637 traddr: 10.0.0.1 00:17:49.637 eflags: none 00:17:49.637 sectype: none 00:17:49.637 =====Discovery Log Entry 1====== 00:17:49.637 trtype: tcp 00:17:49.637 adrfam: ipv4 00:17:49.637 subtype: nvme subsystem 00:17:49.637 treq: not specified, sq flow control disable supported 00:17:49.637 portid: 1 00:17:49.637 trsvcid: 4420 00:17:49.637 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:49.637 traddr: 10.0.0.1 00:17:49.637 eflags: none 00:17:49.637 sectype: none 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.637 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 
10.0.0.1 ]] 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.638 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 nvme0n1 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:49.896 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.897 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.156 nvme0n1 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.156 
08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:50.156 08:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.156 nvme0n1 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.156 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:50.415 08:28:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.415 08:28:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.415 nvme0n1 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.415 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.416 08:28:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.416 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.674 nvme0n1 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:50.674 
08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:50.674 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
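The trace above repeats the same pattern for every digest/dhgroup/keyid combination: host/auth.sh first programs the kernel nvmet target with the parameters it should expect from the host (the nvmet_auth_set_key echoes), then reconnects the SPDK initiator with the matching key pair. The log shows only the bare echo commands, not their configfs destinations; the sketch below is a rough reconstruction of the target-side half of one iteration, with the attribute names assumed from the kernel's nvmet-auth configfs interface rather than taken from this log, and the key material abbreviated:

    # hedged sketch of what the nvmet_auth_set_key echoes above likely write (attribute names assumed)
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'  > "$host/dhchap_hash"       # digest used for DH-HMAC-CHAP
    echo 'ffdhe2048'     > "$host/dhchap_dhgroup"    # FFDHE group under test
    echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host key (keyid 0-4 in the loop above)
    echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"   # controller key; skipped when ckey is empty (unidirectional auth)
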
00:17:50.675 nvme0n1 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.675 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:50.934 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:51.192 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:51.192 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:17:51.192 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.193 08:28:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.193 nvme0n1 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.193 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.453 08:28:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.453 08:28:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.453 08:28:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.453 nvme0n1 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.453 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.454 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.729 nvme0n1 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.729 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.988 nvme0n1 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.988 nvme0n1 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.988 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:52.247 08:28:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.813 08:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.813 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 nvme0n1 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.072 08:28:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.072 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.331 nvme0n1 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:53.331 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.332 08:28:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.590 nvme0n1 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:17:53.590 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.591 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.849 nvme0n1 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.849 08:28:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.849 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 nvme0n1 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:54.108 08:28:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.007 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.266 nvme0n1 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.266 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.267 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:56.267 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.267 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:56.267 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:56.267 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:56.267 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.267 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.267 08:28:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.525 nvme0n1 00:17:56.525 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.525 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:56.525 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:56.525 08:28:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.525 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.525 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.783 08:28:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:56.783 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:56.784 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:56.784 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.784 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.784 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.042 nvme0n1 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:57.042 08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.042 
08:28:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.609 nvme0n1 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.609 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.867 nvme0n1 00:17:57.867 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.867 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.867 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.867 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.867 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.868 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.126 08:28:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:17:58.126 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.127 08:28:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.694 nvme0n1 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.694 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.260 nvme0n1 00:17:59.260 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.260 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.260 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.260 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.260 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.260 08:29:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.518 
08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.518 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.085 nvme0n1 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.085 08:29:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.650 nvme0n1 00:18:00.650 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.650 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.650 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.650 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.650 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.650 08:29:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:00.909 08:29:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.909 08:29:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.476 nvme0n1 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.476 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.736 nvme0n1 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.736 nvme0n1 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.736 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:01.995 
08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:01.995 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.996 nvme0n1 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.996 
08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.996 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.255 nvme0n1 00:18:02.255 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.255 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.255 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.255 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.256 nvme0n1 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.256 08:29:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.516 nvme0n1 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.516 
08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.516 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.775 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.775 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.775 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:02.775 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.775 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:02.775 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:02.775 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:02.775 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:02.776 08:29:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 nvme0n1 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:02.776 08:29:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.776 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.035 nvme0n1 00:18:03.035 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.035 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.035 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.035 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.035 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.036 08:29:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.036 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.295 nvme0n1 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.295 
08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.295 08:29:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
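The trace above repeats one pattern per digest/dhgroup/keyid combination: nvmet_auth_set_key programs the expected secret for the host entry on the kernel nvmet target, bdev_nvme_set_options pins the SPDK host to a single DH-HMAC-CHAP digest and DH group, bdev_nvme_attach_controller connects with the matching --dhchap-key/--dhchap-ctrlr-key keyring entries, and the controller name is checked before bdev_nvme_detach_controller tears it down (the bare nvme0n1 lines interleaved in the trace are the bdev name the attach RPC reports once the namespace appears). Below is a minimal bash sketch of one such iteration, assembled only from the calls visible in this trace; the nvmet configfs paths and the rpc_cmd wrapper around SPDK's scripts/rpc.py are assumptions, since xtrace does not show the echo redirection targets or the RPC socket.

  #!/usr/bin/env bash
  # Hedged sketch of one iteration of the auth loop traced above
  # (digest=sha384, dhgroup=ffdhe3072, keyid=1). Assumptions: rpc_cmd wraps
  # SPDK's scripts/rpc.py against the running host application, key1/ckey1 are
  # keyring entries registered earlier in the test, and the configfs paths
  # below are the (unshown) destinations of the echo calls in the trace.

  digest=sha384
  dhgroup=ffdhe3072
  keyid=1
  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0
  # DH-HMAC-CHAP secrets as programmed in the trace for keyid=1
  key="DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==:"
  ckey="DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==:"

  # Target (kernel nvmet) side: program the expected key material for this host.
  host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed configfs path
  echo "hmac($digest)" > "$host_cfs/dhchap_hash"
  echo "$dhgroup"      > "$host_cfs/dhchap_dhgroup"
  echo "$key"          > "$host_cfs/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"

  # Host (SPDK initiator) side: restrict negotiation to this digest/dhgroup and
  # attach with the matching keyring entries, exactly as traced above.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q "$hostnqn" -n "$subnqn" \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Verify the controller authenticated and came up under the expected name,
  # then detach before the next digest/dhgroup/key combination.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Each subsequent block in the log re-runs this same sequence with the next keyid, and after keyid 4 moves on to the next DH group (ffdhe4096, then ffdhe6144) with the same sha384 digest.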
00:18:03.295 nvme0n1 00:18:03.295 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.295 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.295 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.295 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.295 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:03.554 08:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.554 nvme0n1 00:18:03.554 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.813 08:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:03.813 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.814 08:29:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.814 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.095 nvme0n1 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.095 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.354 nvme0n1 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.354 08:29:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.613 nvme0n1 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.613 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.872 nvme0n1 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.872 08:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.872 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.131 nvme0n1 00:18:05.131 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.131 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.131 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.131 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.131 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.391 08:29:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.391 08:29:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.651 nvme0n1 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:05.651 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.910 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.168 nvme0n1 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:06.168 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.169 08:29:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.426 nvme0n1 00:18:06.426 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.426 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.426 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.426 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.426 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:06.684 08:29:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.684 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.943 nvme0n1 00:18:06.943 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.943 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.943 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.944 08:29:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.882 nvme0n1 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.882 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.450 nvme0n1 00:18:08.450 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.450 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.450 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.450 08:29:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.450 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.450 08:29:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.450 08:29:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.450 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.017 nvme0n1 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:09.017 08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.017 
08:29:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.955 nvme0n1 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:09.955 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.956 08:29:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.523 nvme0n1 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:10.523 08:29:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:10.523 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:10.524 08:29:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.524 nvme0n1 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.524 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.783 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.783 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:10.784 08:29:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.784 nvme0n1 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.784 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.043 nvme0n1 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.043 nvme0n1 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.043 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.304 nvme0n1 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.304 08:29:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.304 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.304 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.304 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.304 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.304 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.305 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.565 nvme0n1 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.565 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 nvme0n1 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:11.824 
08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.824 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.825 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:11.825 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.825 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:11.825 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:11.825 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:11.825 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.825 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.825 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.084 nvme0n1 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.084 
08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.084 nvme0n1 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.084 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.085 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.344 nvme0n1 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.344 08:29:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:12.344 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.345 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.603 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.604 nvme0n1 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.604 
08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.604 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:12.862 08:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:12.862 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.863 nvme0n1 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.863 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:13.121 08:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.121 nvme0n1 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.121 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.122 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.122 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.122 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.380 08:29:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.380 08:29:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.380 nvme0n1 00:18:13.380 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.380 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.380 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.380 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.380 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.380 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:13.640 
08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:18:13.640 nvme0n1 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.640 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:13.899 08:29:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.899 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.900 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:13.900 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.900 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:13.900 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:13.900 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:13.900 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.900 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.900 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.158 nvme0n1 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.158 08:29:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:14.158 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.159 08:29:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.159 08:29:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.727 nvme0n1 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.727 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.728 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.987 nvme0n1 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.987 08:29:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.556 nvme0n1 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:15.556 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:15.557 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:15.557 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.557 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.815 nvme0n1 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.815 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTk2MGYwOWRmODNlZDUzNzc5Njg0MzU0MzRiMGVkZmasI+NX: 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: ]] 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2NjOGUxZjgwYmQ4ZTRhODIxZTRiYmQ1NmQ0ZDAyY2FjM2E5OTg5MWQzMjZhMDJjMTYxZjU2YjJkNzZmNDA4NI1PaoE=: 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.816 08:29:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.816 08:29:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.784 nvme0n1 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.784 08:29:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.784 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.399 nvme0n1 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.399 08:29:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.967 nvme0n1 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDYwN2M4ZDI2MDc1NGViZTg0ZTZjOTFjZGRiNDNjYTllZTAzOTBiNDRkMTYzZTI2zEkqlw==: 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: ]] 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyM2IxNDA4MWQ4YzU2NjEzNDQyYTdjOTBjNDhiM2U84Wgk: 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:17.967 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.968 08:29:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.535 nvme0n1 00:18:18.535 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjkyZTNmNDYzOTFkMzEzNGM5MGRhNmZmYTdjMzA2NWUwMmQyOWU4ZTlmNTEyZGIzN2Q3MzJkNjgyMDMxNmVjYvYcn7w=: 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:18.536 08:29:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.536 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.476 nvme0n1 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:19.476 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 request: 00:18:19.477 { 00:18:19.477 "name": "nvme0", 00:18:19.477 "trtype": "tcp", 00:18:19.477 "traddr": "10.0.0.1", 00:18:19.477 "adrfam": "ipv4", 00:18:19.477 "trsvcid": "4420", 00:18:19.477 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:19.477 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:19.477 "prchk_reftag": false, 00:18:19.477 "prchk_guard": false, 00:18:19.477 "hdgst": false, 00:18:19.477 "ddgst": false, 00:18:19.477 "allow_unrecognized_csi": false, 00:18:19.477 "method": "bdev_nvme_attach_controller", 00:18:19.477 "req_id": 1 00:18:19.477 } 00:18:19.477 Got JSON-RPC error response 00:18:19.477 response: 00:18:19.477 { 00:18:19.477 "code": -5, 00:18:19.477 "message": "Input/output error" 00:18:19.477 } 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.477 08:29:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 request: 00:18:19.477 { 00:18:19.477 "name": "nvme0", 00:18:19.477 "trtype": "tcp", 00:18:19.477 "traddr": "10.0.0.1", 00:18:19.477 "adrfam": "ipv4", 00:18:19.477 "trsvcid": "4420", 00:18:19.477 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:19.477 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:19.477 "prchk_reftag": false, 00:18:19.477 "prchk_guard": false, 00:18:19.477 "hdgst": false, 00:18:19.477 "ddgst": false, 00:18:19.477 "dhchap_key": "key2", 00:18:19.477 "allow_unrecognized_csi": false, 00:18:19.477 "method": "bdev_nvme_attach_controller", 00:18:19.477 "req_id": 1 00:18:19.477 } 00:18:19.477 Got JSON-RPC error response 00:18:19.477 response: 00:18:19.477 { 00:18:19.477 "code": -5, 00:18:19.477 "message": "Input/output error" 00:18:19.477 } 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.477 08:29:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.477 request: 00:18:19.477 { 00:18:19.477 "name": "nvme0", 00:18:19.477 "trtype": "tcp", 00:18:19.477 "traddr": "10.0.0.1", 00:18:19.477 "adrfam": "ipv4", 00:18:19.477 "trsvcid": "4420", 
00:18:19.477 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:19.477 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:19.477 "prchk_reftag": false, 00:18:19.477 "prchk_guard": false, 00:18:19.477 "hdgst": false, 00:18:19.477 "ddgst": false, 00:18:19.477 "dhchap_key": "key1", 00:18:19.477 "dhchap_ctrlr_key": "ckey2", 00:18:19.477 "allow_unrecognized_csi": false, 00:18:19.477 "method": "bdev_nvme_attach_controller", 00:18:19.477 "req_id": 1 00:18:19.477 } 00:18:19.477 Got JSON-RPC error response 00:18:19.477 response: 00:18:19.477 { 00:18:19.477 "code": -5, 00:18:19.477 "message": "Input/output error" 00:18:19.477 } 00:18:19.477 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.478 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 nvme0n1 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 request: 00:18:19.773 { 00:18:19.773 "name": "nvme0", 00:18:19.773 "dhchap_key": "key1", 00:18:19.773 "dhchap_ctrlr_key": "ckey2", 00:18:19.773 "method": "bdev_nvme_set_keys", 00:18:19.773 "req_id": 1 00:18:19.773 } 00:18:19.773 Got JSON-RPC error response 00:18:19.773 response: 00:18:19.773 
{ 00:18:19.773 "code": -13, 00:18:19.773 "message": "Permission denied" 00:18:19.773 } 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:19.773 08:29:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmE3ZTljMTBjYjBhMDdlNmI3YTFiOGFiODhhYzdkNjRkODc5NGM5NWY2ZmY1NWNhbMFz8A==: 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: ]] 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzNjMTRjZDdkYWFkMjRmNTg0YmZjYWUwODc3NGY1MTBiNjYzNGFjYjgwMDZjNjY5xeefeg==: 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.150 nvme0n1 00:18:21.150 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZWY2OWY0ZjA4ZjlhNTAyMjU0YzhjM2Q4YjJkZDMwZWEi8Xr8: 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: ]] 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzBhMDg4ZWI0YWZmMTJkNDg4ZWQ0M2FkZDMxYThiZThQSAd4: 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 request: 00:18:21.151 { 00:18:21.151 "name": "nvme0", 00:18:21.151 "dhchap_key": "key2", 00:18:21.151 "dhchap_ctrlr_key": "ckey1", 00:18:21.151 "method": "bdev_nvme_set_keys", 00:18:21.151 "req_id": 1 00:18:21.151 } 00:18:21.151 Got JSON-RPC error response 00:18:21.151 response: 00:18:21.151 { 00:18:21.151 "code": -13, 00:18:21.151 "message": "Permission denied" 00:18:21.151 } 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:21.151 08:29:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:22.086 rmmod nvme_tcp 00:18:22.086 rmmod nvme_fabrics 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 78618 ']' 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 78618 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 78618 ']' 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 78618 00:18:22.086 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:18:22.345 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.345 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78618 00:18:22.345 killing process with pid 78618 00:18:22.345 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:22.345 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:22.345 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78618' 00:18:22.345 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 78618 00:18:22.345 08:29:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 78618 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:22.603 08:29:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.603 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:18:22.862 08:29:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:23.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:23.687 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:18:23.687 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:23.687 08:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.SFH /tmp/spdk.key-null.cu7 /tmp/spdk.key-sha256.tDE /tmp/spdk.key-sha384.6Je /tmp/spdk.key-sha512.OnM /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:23.687 08:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:23.945 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:24.202 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:24.202 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:24.202 00:18:24.202 real 0m38.109s 00:18:24.202 user 0m34.335s 00:18:24.202 sys 0m4.143s 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.202 ************************************ 00:18:24.202 END TEST nvmf_auth_host 00:18:24.202 ************************************ 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:24.202 ************************************ 00:18:24.202 START TEST nvmf_digest 00:18:24.202 ************************************ 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:24.202 * Looking for test storage... 
00:18:24.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:18:24.202 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.461 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:24.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.462 --rc genhtml_branch_coverage=1 00:18:24.462 --rc genhtml_function_coverage=1 00:18:24.462 --rc genhtml_legend=1 00:18:24.462 --rc geninfo_all_blocks=1 00:18:24.462 --rc geninfo_unexecuted_blocks=1 00:18:24.462 00:18:24.462 ' 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:24.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.462 --rc genhtml_branch_coverage=1 00:18:24.462 --rc genhtml_function_coverage=1 00:18:24.462 --rc genhtml_legend=1 00:18:24.462 --rc geninfo_all_blocks=1 00:18:24.462 --rc geninfo_unexecuted_blocks=1 00:18:24.462 00:18:24.462 ' 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:24.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.462 --rc genhtml_branch_coverage=1 00:18:24.462 --rc genhtml_function_coverage=1 00:18:24.462 --rc genhtml_legend=1 00:18:24.462 --rc geninfo_all_blocks=1 00:18:24.462 --rc geninfo_unexecuted_blocks=1 00:18:24.462 00:18:24.462 ' 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:24.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.462 --rc genhtml_branch_coverage=1 00:18:24.462 --rc genhtml_function_coverage=1 00:18:24.462 --rc genhtml_legend=1 00:18:24.462 --rc geninfo_all_blocks=1 00:18:24.462 --rc geninfo_unexecuted_blocks=1 00:18:24.462 00:18:24.462 ' 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.462 08:29:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.462 08:29:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.462 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@458 -- # nvmf_veth_init 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:24.462 Cannot find device "nvmf_init_br" 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:24.462 Cannot find device "nvmf_init_br2" 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:24.462 Cannot find device "nvmf_tgt_br" 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:24.462 Cannot find device "nvmf_tgt_br2" 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:24.462 Cannot find device "nvmf_init_br" 00:18:24.462 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:24.463 Cannot find device "nvmf_init_br2" 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:24.463 Cannot find device "nvmf_tgt_br" 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:24.463 Cannot find device "nvmf_tgt_br2" 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:24.463 Cannot find device "nvmf_br" 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:24.463 Cannot find device "nvmf_init_if" 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:24.463 Cannot find device "nvmf_init_if2" 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.463 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.721 08:29:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.721 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:24.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:24.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:18:24.722 00:18:24.722 --- 10.0.0.3 ping statistics --- 00:18:24.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.722 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:24.722 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:24.722 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:18:24.722 00:18:24.722 --- 10.0.0.4 ping statistics --- 00:18:24.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.722 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:18:24.722 00:18:24.722 --- 10.0.0.1 ping statistics --- 00:18:24.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.722 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:24.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:24.722 00:18:24.722 --- 10.0.0.2 ping statistics --- 00:18:24.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.722 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # return 0 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:24.722 ************************************ 00:18:24.722 START TEST nvmf_digest_clean 00:18:24.722 ************************************ 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
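For reference, the nvmf_veth_init sequence logged above boils down to the following shell steps. This is a condensed sketch assembled from the commands in the log (interface names, namespace name, and the 10.0.0.x addresses are the values shown there), not a verbatim copy of nvmf/common.sh; only one initiator/target pair is shown and the stale-device cleanup at the start is omitted.

# Sketch of the topology nvmf_veth_init builds (one initiator/target pair shown)
ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                # bridge ties the *_br ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open TCP/4420 on the initiator interface and allow bridged traffic; the comment tag
# is what the cleanup path later matches on
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ping -c 1 10.0.0.3                                             # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator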
00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=80270 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 80270 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80270 ']' 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.722 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:24.980 [2024-10-15 08:29:26.490411] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:18:24.980 [2024-10-15 08:29:26.490524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.980 [2024-10-15 08:29:26.632165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.980 [2024-10-15 08:29:26.709344] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.980 [2024-10-15 08:29:26.709429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.980 [2024-10-15 08:29:26.709455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.980 [2024-10-15 08:29:26.709466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.980 [2024-10-15 08:29:26.709475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:24.980 [2024-10-15 08:29:26.710043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.238 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:25.238 [2024-10-15 08:29:26.887510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.238 null0 00:18:25.238 [2024-10-15 08:29:26.952270] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.510 [2024-10-15 08:29:26.976414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80296 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80296 /var/tmp/bperf.sock 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80296 ']' 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.510 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:25.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:25.511 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.511 08:29:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:25.511 [2024-10-15 08:29:27.045388] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:18:25.511 [2024-10-15 08:29:27.045516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80296 ] 00:18:25.511 [2024-10-15 08:29:27.185010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.776 [2024-10-15 08:29:27.263865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.776 08:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.776 08:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:25.776 08:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:25.776 08:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:25.776 08:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:26.034 [2024-10-15 08:29:27.653739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.034 08:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.034 08:29:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.292 nvme0n1 00:18:26.550 08:29:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:26.550 08:29:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:26.550 Running I/O for 2 seconds... 
00:18:28.860 15113.00 IOPS, 59.04 MiB/s [2024-10-15T08:29:30.591Z] 15430.50 IOPS, 60.28 MiB/s 00:18:28.860 Latency(us) 00:18:28.860 [2024-10-15T08:29:30.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.860 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:28.860 nvme0n1 : 2.00 15456.64 60.38 0.00 0.00 8275.12 7298.33 21209.83 00:18:28.860 [2024-10-15T08:29:30.591Z] =================================================================================================================== 00:18:28.860 [2024-10-15T08:29:30.591Z] Total : 15456.64 60.38 0.00 0.00 8275.12 7298.33 21209.83 00:18:28.860 { 00:18:28.860 "results": [ 00:18:28.860 { 00:18:28.860 "job": "nvme0n1", 00:18:28.860 "core_mask": "0x2", 00:18:28.860 "workload": "randread", 00:18:28.860 "status": "finished", 00:18:28.860 "queue_depth": 128, 00:18:28.860 "io_size": 4096, 00:18:28.860 "runtime": 2.004899, 00:18:28.860 "iops": 15456.638962860474, 00:18:28.860 "mibps": 60.377495948673726, 00:18:28.860 "io_failed": 0, 00:18:28.860 "io_timeout": 0, 00:18:28.860 "avg_latency_us": 8275.119436515597, 00:18:28.860 "min_latency_us": 7298.327272727272, 00:18:28.860 "max_latency_us": 21209.832727272726 00:18:28.860 } 00:18:28.860 ], 00:18:28.860 "core_count": 1 00:18:28.860 } 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:28.860 | select(.opcode=="crc32c") 00:18:28.860 | "\(.module_name) \(.executed)"' 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80296 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80296 ']' 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80296 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80296 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:18:28.860 killing process with pid 80296 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80296' 00:18:28.860 Received shutdown signal, test time was about 2.000000 seconds 00:18:28.860 00:18:28.860 Latency(us) 00:18:28.860 [2024-10-15T08:29:30.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.860 [2024-10-15T08:29:30.591Z] =================================================================================================================== 00:18:28.860 [2024-10-15T08:29:30.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80296 00:18:28.860 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80296 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80349 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80349 /var/tmp/bperf.sock 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80349 ']' 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:29.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.120 08:29:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:29.378 [2024-10-15 08:29:30.854193] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:18:29.378 [2024-10-15 08:29:30.854313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80349 ] 00:18:29.378 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:29.378 Zero copy mechanism will not be used. 00:18:29.378 [2024-10-15 08:29:30.991533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.378 [2024-10-15 08:29:31.067487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.315 08:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.315 08:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:30.315 08:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:30.315 08:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:30.315 08:29:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:30.574 [2024-10-15 08:29:32.225092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.574 08:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:30.574 08:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:31.142 nvme0n1 00:18:31.142 08:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:31.142 08:29:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:31.142 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:31.142 Zero copy mechanism will not be used. 00:18:31.142 Running I/O for 2 seconds... 
00:18:33.454 7536.00 IOPS, 942.00 MiB/s [2024-10-15T08:29:35.185Z] 7528.00 IOPS, 941.00 MiB/s 00:18:33.454 Latency(us) 00:18:33.454 [2024-10-15T08:29:35.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.454 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:33.454 nvme0n1 : 2.00 7527.10 940.89 0.00 0.00 2122.17 1832.03 3932.16 00:18:33.454 [2024-10-15T08:29:35.185Z] =================================================================================================================== 00:18:33.454 [2024-10-15T08:29:35.185Z] Total : 7527.10 940.89 0.00 0.00 2122.17 1832.03 3932.16 00:18:33.454 { 00:18:33.454 "results": [ 00:18:33.454 { 00:18:33.454 "job": "nvme0n1", 00:18:33.454 "core_mask": "0x2", 00:18:33.454 "workload": "randread", 00:18:33.454 "status": "finished", 00:18:33.454 "queue_depth": 16, 00:18:33.454 "io_size": 131072, 00:18:33.454 "runtime": 2.002364, 00:18:33.454 "iops": 7527.102964296202, 00:18:33.454 "mibps": 940.8878705370253, 00:18:33.454 "io_failed": 0, 00:18:33.454 "io_timeout": 0, 00:18:33.454 "avg_latency_us": 2122.16736537348, 00:18:33.454 "min_latency_us": 1832.0290909090909, 00:18:33.454 "max_latency_us": 3932.16 00:18:33.454 } 00:18:33.454 ], 00:18:33.454 "core_count": 1 00:18:33.454 } 00:18:33.454 08:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:33.454 08:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:33.454 08:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:33.454 08:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:33.454 | select(.opcode=="crc32c") 00:18:33.454 | "\(.module_name) \(.executed)"' 00:18:33.454 08:29:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80349 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80349 ']' 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80349 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80349 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:33.454 killing process with pid 80349 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80349' 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80349 00:18:33.454 Received shutdown signal, test time was about 2.000000 seconds 00:18:33.454 00:18:33.454 Latency(us) 00:18:33.454 [2024-10-15T08:29:35.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.454 [2024-10-15T08:29:35.185Z] =================================================================================================================== 00:18:33.454 [2024-10-15T08:29:35.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.454 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80349 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80409 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80409 /var/tmp/bperf.sock 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80409 ']' 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.712 08:29:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:33.972 [2024-10-15 08:29:35.451431] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
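Once a bdevperf process is listening, the clean-digest path drives it entirely over RPC: framework_start_init brings the app out of --wait-for-rpc, bdev_nvme_attach_controller creates an NVMe/TCP bdev against the listener set up earlier (with --ddgst so TCP data digests are enabled on the connection), and perform_tests kicks off the timed I/O run. These are the same invocations that appear in the log, collected in order:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests    # runs the configured workload for -t seconds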
00:18:33.972 [2024-10-15 08:29:35.451571] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80409 ] 00:18:33.972 [2024-10-15 08:29:35.584643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.972 [2024-10-15 08:29:35.663056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.935 08:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.935 08:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:34.935 08:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:34.935 08:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:34.935 08:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:35.194 [2024-10-15 08:29:36.812115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:35.194 08:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:35.194 08:29:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:35.762 nvme0n1 00:18:35.762 08:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:35.762 08:29:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:35.762 Running I/O for 2 seconds... 
00:18:37.637 16130.00 IOPS, 63.01 MiB/s [2024-10-15T08:29:39.368Z] 16193.00 IOPS, 63.25 MiB/s 00:18:37.637 Latency(us) 00:18:37.637 [2024-10-15T08:29:39.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.637 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.637 nvme0n1 : 2.01 16234.73 63.42 0.00 0.00 7877.76 6911.07 15371.17 00:18:37.637 [2024-10-15T08:29:39.368Z] =================================================================================================================== 00:18:37.637 [2024-10-15T08:29:39.368Z] Total : 16234.73 63.42 0.00 0.00 7877.76 6911.07 15371.17 00:18:37.637 { 00:18:37.637 "results": [ 00:18:37.637 { 00:18:37.637 "job": "nvme0n1", 00:18:37.637 "core_mask": "0x2", 00:18:37.637 "workload": "randwrite", 00:18:37.637 "status": "finished", 00:18:37.637 "queue_depth": 128, 00:18:37.637 "io_size": 4096, 00:18:37.637 "runtime": 2.010566, 00:18:37.637 "iops": 16234.731911312536, 00:18:37.637 "mibps": 63.41692152856459, 00:18:37.637 "io_failed": 0, 00:18:37.637 "io_timeout": 0, 00:18:37.637 "avg_latency_us": 7877.755253264856, 00:18:37.637 "min_latency_us": 6911.069090909091, 00:18:37.637 "max_latency_us": 15371.17090909091 00:18:37.637 } 00:18:37.637 ], 00:18:37.637 "core_count": 1 00:18:37.637 } 00:18:37.896 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:37.896 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:37.896 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:37.896 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:37.896 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:37.896 | select(.opcode=="crc32c") 00:18:37.896 | "\(.module_name) \(.executed)"' 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80409 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80409 ']' 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80409 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80409 00:18:38.155 killing process with pid 80409 00:18:38.155 Received shutdown signal, test time was about 2.000000 seconds 00:18:38.155 00:18:38.155 Latency(us) 00:18:38.155 [2024-10-15T08:29:39.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:38.155 [2024-10-15T08:29:39.886Z] =================================================================================================================== 00:18:38.155 [2024-10-15T08:29:39.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80409' 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80409 00:18:38.155 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80409 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80476 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80476 /var/tmp/bperf.sock 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 80476 ']' 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:38.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:38.414 08:29:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:38.414 [2024-10-15 08:29:40.010832] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
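The IOPS and MiB/s columns in the result tables are related by the I/O size of the run: MiB/s = IOPS x io_size / 2^20. A quick sanity check against the 4 KiB randwrite numbers above (16234.73 IOPS, reported as 63.42 MiB/s):

# verify MiB/s = IOPS * io_size / 2^20 for the 4 KiB randwrite result above
awk 'BEGIN { printf "%.2f MiB/s\n", 16234.73 * 4096 / (1024 * 1024) }'   # prints 63.42 MiB/s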
00:18:38.414 [2024-10-15 08:29:40.011281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80476 ] 00:18:38.414 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:38.414 Zero copy mechanism will not be used. 00:18:38.709 [2024-10-15 08:29:40.161997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.709 [2024-10-15 08:29:40.240403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.650 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.650 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:18:39.650 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:39.650 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:39.650 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:39.910 [2024-10-15 08:29:41.386682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:39.910 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:39.910 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:40.169 nvme0n1 00:18:40.169 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:40.169 08:29:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:40.427 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:40.427 Zero copy mechanism will not be used. 00:18:40.428 Running I/O for 2 seconds... 
00:18:42.298 6271.00 IOPS, 783.88 MiB/s [2024-10-15T08:29:44.029Z] 6326.00 IOPS, 790.75 MiB/s 00:18:42.298 Latency(us) 00:18:42.298 [2024-10-15T08:29:44.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.298 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:42.298 nvme0n1 : 2.00 6323.10 790.39 0.00 0.00 2524.53 1891.61 9294.20 00:18:42.298 [2024-10-15T08:29:44.029Z] =================================================================================================================== 00:18:42.298 [2024-10-15T08:29:44.029Z] Total : 6323.10 790.39 0.00 0.00 2524.53 1891.61 9294.20 00:18:42.298 { 00:18:42.298 "results": [ 00:18:42.298 { 00:18:42.298 "job": "nvme0n1", 00:18:42.298 "core_mask": "0x2", 00:18:42.298 "workload": "randwrite", 00:18:42.298 "status": "finished", 00:18:42.298 "queue_depth": 16, 00:18:42.298 "io_size": 131072, 00:18:42.298 "runtime": 2.00424, 00:18:42.298 "iops": 6323.095038518341, 00:18:42.298 "mibps": 790.3868798147927, 00:18:42.298 "io_failed": 0, 00:18:42.298 "io_timeout": 0, 00:18:42.298 "avg_latency_us": 2524.5346628121347, 00:18:42.298 "min_latency_us": 1891.6072727272726, 00:18:42.298 "max_latency_us": 9294.196363636363 00:18:42.298 } 00:18:42.298 ], 00:18:42.298 "core_count": 1 00:18:42.298 } 00:18:42.298 08:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:42.298 08:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:42.298 08:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:42.298 08:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:42.298 08:29:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:42.298 | select(.opcode=="crc32c") 00:18:42.298 | "\(.module_name) \(.executed)"' 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80476 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80476 ']' 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80476 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.557 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80476 00:18:42.816 killing process with pid 80476 00:18:42.816 Received shutdown signal, test time was about 2.000000 seconds 00:18:42.816 00:18:42.816 Latency(us) 00:18:42.816 [2024-10-15T08:29:44.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:42.816 [2024-10-15T08:29:44.547Z] =================================================================================================================== 00:18:42.816 [2024-10-15T08:29:44.547Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.816 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:42.816 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:42.816 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80476' 00:18:42.816 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80476 00:18:42.816 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80476 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80270 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 80270 ']' 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 80270 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80270 00:18:43.075 killing process with pid 80270 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80270' 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 80270 00:18:43.075 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 80270 00:18:43.366 00:18:43.366 real 0m18.434s 00:18:43.366 user 0m36.352s 00:18:43.366 sys 0m4.910s 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.366 ************************************ 00:18:43.366 END TEST nvmf_digest_clean 00:18:43.366 ************************************ 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:43.366 ************************************ 00:18:43.366 START TEST nvmf_digest_error 00:18:43.366 ************************************ 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:18:43.366 08:29:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=80559 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 80559 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80559 ']' 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.366 08:29:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:43.366 [2024-10-15 08:29:44.971791] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:18:43.366 [2024-10-15 08:29:44.971893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.652 [2024-10-15 08:29:45.108105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.652 [2024-10-15 08:29:45.186290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.652 [2024-10-15 08:29:45.186600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.652 [2024-10-15 08:29:45.186647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.652 [2024-10-15 08:29:45.186659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.652 [2024-10-15 08:29:45.186666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
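The nvmf_digest_error test that starts here brings up its own nvmf_tgt inside the target namespace, again with --wait-for-rpc so that crc32c can be re-assigned to the error-injection accel module before the transport is created. A sketch of that startup, using the command logged above; the wait loop is a simplified stand-in for waitforlisten, and rpc.py is assumed to use its default /var/tmp/spdk.sock socket as in the log.

# start the target in the netns, idle, and wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done   # stand-in for waitforlisten
# route crc32c through the error module so digests can later be corrupted on demand
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error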
00:18:43.652 [2024-10-15 08:29:45.187152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.589 08:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.589 08:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:44.589 08:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:44.589 08:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.589 08:29:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.589 [2024-10-15 08:29:46.027743] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.589 [2024-10-15 08:29:46.110468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.589 null0 00:18:44.589 [2024-10-15 08:29:46.172754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.589 [2024-10-15 08:29:46.196889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80597 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80597 /var/tmp/bperf.sock 00:18:44.589 08:29:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80597 ']' 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:44.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.589 08:29:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.589 [2024-10-15 08:29:46.265020] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:18:44.589 [2024-10-15 08:29:46.265375] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80597 ] 00:18:44.848 [2024-10-15 08:29:46.406880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.848 [2024-10-15 08:29:46.488872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.848 [2024-10-15 08:29:46.565059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:45.784 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.352 nvme0n1 00:18:46.352 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:46.352 08:29:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.352 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.352 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.352 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:46.352 08:29:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:46.352 Running I/O for 2 seconds... 00:18:46.352 [2024-10-15 08:29:48.021217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.352 [2024-10-15 08:29:48.021286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.352 [2024-10-15 08:29:48.021304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.352 [2024-10-15 08:29:48.038883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.352 [2024-10-15 08:29:48.039104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.352 [2024-10-15 08:29:48.039156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.352 [2024-10-15 08:29:48.056111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.352 [2024-10-15 08:29:48.056186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.352 [2024-10-15 08:29:48.056218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.352 [2024-10-15 08:29:48.073026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.352 [2024-10-15 08:29:48.073063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.352 [2024-10-15 08:29:48.073104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.089012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.089051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.089096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.104986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.105025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17235 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.105054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.120609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.120646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.120676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.136455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.136494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.136523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.153555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.153595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.153624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.170890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.171142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.171161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.188530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.188730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.188765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.206340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.206549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.206583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.223365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.223570] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.223770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.241051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.241283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.241413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.258346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.258528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.258655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.276146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.276346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.276492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.293767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.293972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.294114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.311061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.311285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.311410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.611 [2024-10-15 08:29:48.328789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.611 [2024-10-15 08:29:48.328971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.611 [2024-10-15 08:29:48.329114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.346569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.346771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.346914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.363857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.364051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.364201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.381258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.381467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.381614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.398564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.398607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.398621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.415823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.415861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.415892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.432748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.432786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.432814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.448687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.448725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.448754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.464693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.464731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.464761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.481611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.481794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.481827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.497789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.497827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.870 [2024-10-15 08:29:48.497856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.870 [2024-10-15 08:29:48.514196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.870 [2024-10-15 08:29:48.514244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.871 [2024-10-15 08:29:48.514258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.871 [2024-10-15 08:29:48.530994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.871 [2024-10-15 08:29:48.531031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.871 [2024-10-15 08:29:48.531059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.871 [2024-10-15 08:29:48.547412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.871 [2024-10-15 08:29:48.547449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.871 [2024-10-15 08:29:48.547477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.871 [2024-10-15 08:29:48.563902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.871 [2024-10-15 08:29:48.563940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.871 [2024-10-15 08:29:48.563968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.871 [2024-10-15 08:29:48.580904] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.871 [2024-10-15 08:29:48.580942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.871 [2024-10-15 08:29:48.580970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:46.871 [2024-10-15 08:29:48.596892] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:46.871 [2024-10-15 08:29:48.596931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:46.871 [2024-10-15 08:29:48.596960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.613242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.613282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.613296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.630001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.630039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.630068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.645635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.645670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.645698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.661747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.661785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.661814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.678468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.678674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.678708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:47.130 [2024-10-15 08:29:48.696198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.696259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.696290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.713329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.713366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.713397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.729719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.729756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.729785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.745663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.745700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.745729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.761434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.761470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.761498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.777506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.777543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.777571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.794307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.794475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.794493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.811946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.811986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.812016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.828510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.828672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.828692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.130 [2024-10-15 08:29:48.845079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.130 [2024-10-15 08:29:48.845162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.130 [2024-10-15 08:29:48.845194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.861300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.861337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.861366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.877583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.877620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.877649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.894223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.894263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.894277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.910894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.911096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.911129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.927191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.927228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.927257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.943044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.943082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.943111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.958938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.959156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.959176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.975137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.975398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.975522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:48.990819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:48.991031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:48.991275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 15054.00 IOPS, 58.80 MiB/s [2024-10-15T08:29:49.121Z] [2024-10-15 08:29:49.007632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:49.007829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:49.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:49.023405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:49.023604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:10118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:49.023742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:49.041448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:49.041626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:49.041754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:49.059645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:49.059827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:49.060063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:49.084224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:49.084444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:49.084593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:49.101328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:49.101366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:49.101395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.390 [2024-10-15 08:29:49.117861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.390 [2024-10-15 08:29:49.117900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.390 [2024-10-15 08:29:49.117928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.133931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.133969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.133997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.150266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.150456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.150507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.167021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.167058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.167087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.182238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.182276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.182305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.198101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.198195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.198212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.215353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.215536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.215570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.231556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.231594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.231622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.248033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.248072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.248102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.264303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.264340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.264369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.279834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.279871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.279900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.296868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.296907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.296936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.314384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.314602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.314634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.331898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.331936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.331965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.349258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.349297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.349310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.650 [2024-10-15 08:29:49.366365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.650 [2024-10-15 08:29:49.366405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.650 [2024-10-15 08:29:49.366423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.382778] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.382815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.382844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.398398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.398618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.398652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.415869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.415909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.415940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.433437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.433507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.433537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.450072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.450107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.450191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.465865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.465902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.465931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.481459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.481496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.481524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:47.910 [2024-10-15 08:29:49.497164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.497200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.497228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.512763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.512800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.512828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.528557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.528596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.528625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.544566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.544604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.544634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.560410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.910 [2024-10-15 08:29:49.560448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.910 [2024-10-15 08:29:49.560478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.910 [2024-10-15 08:29:49.576262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.911 [2024-10-15 08:29:49.576299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.911 [2024-10-15 08:29:49.576327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.911 [2024-10-15 08:29:49.592123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.911 [2024-10-15 08:29:49.592169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.911 [2024-10-15 08:29:49.592212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.911 [2024-10-15 08:29:49.607540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.911 [2024-10-15 08:29:49.607576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.911 [2024-10-15 08:29:49.607604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.911 [2024-10-15 08:29:49.623518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.911 [2024-10-15 08:29:49.623567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.911 [2024-10-15 08:29:49.623595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.911 [2024-10-15 08:29:49.639028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:47.911 [2024-10-15 08:29:49.639271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.911 [2024-10-15 08:29:49.639288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.654847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.655030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.655062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.670339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.670377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.670406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.685476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.685671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.685705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.702861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.703044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.703077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.719783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.719820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.719849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.736139] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.736201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.736215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.752904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.752942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.752970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.769301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.769338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.769366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.784992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.785030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.785058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.801600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.801637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.801665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.818379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.818573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:48.170 [2024-10-15 08:29:49.818606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.834357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.834397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.170 [2024-10-15 08:29:49.834411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.170 [2024-10-15 08:29:49.850059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.170 [2024-10-15 08:29:49.850246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.171 [2024-10-15 08:29:49.850263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.171 [2024-10-15 08:29:49.867358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.171 [2024-10-15 08:29:49.867396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.171 [2024-10-15 08:29:49.867425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.171 [2024-10-15 08:29:49.882727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.171 [2024-10-15 08:29:49.882763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.171 [2024-10-15 08:29:49.882792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.430 [2024-10-15 08:29:49.900119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.430 [2024-10-15 08:29:49.900307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.430 [2024-10-15 08:29:49.900326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.430 [2024-10-15 08:29:49.917104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.430 [2024-10-15 08:29:49.917193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.430 [2024-10-15 08:29:49.917209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.430 [2024-10-15 08:29:49.933769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.430 [2024-10-15 08:29:49.933809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:7121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.430 [2024-10-15 08:29:49.933840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.430 [2024-10-15 08:29:49.950865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.430 [2024-10-15 08:29:49.950902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.430 [2024-10-15 08:29:49.950937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.430 [2024-10-15 08:29:49.968081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.430 [2024-10-15 08:29:49.968262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.430 [2024-10-15 08:29:49.968280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.430 [2024-10-15 08:29:49.985731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.430 [2024-10-15 08:29:49.985774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.430 [2024-10-15 08:29:49.985789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.430 15244.00 IOPS, 59.55 MiB/s [2024-10-15T08:29:50.161Z] [2024-10-15 08:29:50.002637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11f72a0) 00:18:48.430 [2024-10-15 08:29:50.002808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.430 [2024-10-15 08:29:50.002842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.430 00:18:48.430 Latency(us) 00:18:48.430 [2024-10-15T08:29:50.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.430 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:48.430 nvme0n1 : 2.01 15247.62 59.56 0.00 0.00 8386.73 7328.12 32648.84 00:18:48.430 [2024-10-15T08:29:50.161Z] =================================================================================================================== 00:18:48.430 [2024-10-15T08:29:50.161Z] Total : 15247.62 59.56 0.00 0.00 8386.73 7328.12 32648.84 00:18:48.430 { 00:18:48.430 "results": [ 00:18:48.430 { 00:18:48.430 "job": "nvme0n1", 00:18:48.430 "core_mask": "0x2", 00:18:48.430 "workload": "randread", 00:18:48.430 "status": "finished", 00:18:48.430 "queue_depth": 128, 00:18:48.430 "io_size": 4096, 00:18:48.430 "runtime": 2.00792, 00:18:48.430 "iops": 15247.619427068808, 00:18:48.430 "mibps": 59.56101338698753, 00:18:48.430 "io_failed": 0, 00:18:48.430 "io_timeout": 0, 00:18:48.430 "avg_latency_us": 8386.731489654845, 00:18:48.430 "min_latency_us": 7328.1163636363635, 00:18:48.430 "max_latency_us": 32648.843636363636 00:18:48.430 } 00:18:48.430 ], 00:18:48.430 
"core_count": 1 00:18:48.430 } 00:18:48.430 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:48.430 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:48.430 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:48.430 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:48.430 | .driver_specific 00:18:48.430 | .nvme_error 00:18:48.430 | .status_code 00:18:48.430 | .command_transient_transport_error' 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80597 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80597 ']' 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80597 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80597 00:18:48.690 killing process with pid 80597 00:18:48.690 Received shutdown signal, test time was about 2.000000 seconds 00:18:48.690 00:18:48.690 Latency(us) 00:18:48.690 [2024-10-15T08:29:50.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.690 [2024-10-15T08:29:50.421Z] =================================================================================================================== 00:18:48.690 [2024-10-15T08:29:50.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80597' 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80597 00:18:48.690 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80597 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80653 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80653 /var/tmp/bperf.sock 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80653 ']' 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:48.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:48.949 08:29:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:48.949 [2024-10-15 08:29:50.669954] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:18:48.949 [2024-10-15 08:29:50.670273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80653 ] 00:18:48.949 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:48.949 Zero copy mechanism will not be used. 00:18:49.208 [2024-10-15 08:29:50.803351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.208 [2024-10-15 08:29:50.873549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.467 [2024-10-15 08:29:50.943644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.467 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:49.467 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:49.467 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:49.467 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:49.726 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:49.726 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.726 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.726 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.726 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:49.726 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:49.984 nvme0n1 00:18:49.985 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:49.985 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.985 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.985 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.985 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:49.985 08:29:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:50.245 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:50.245 Zero copy mechanism will not be used. 00:18:50.245 Running I/O for 2 seconds... 00:18:50.245 [2024-10-15 08:29:51.818692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.818749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.818782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.823150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.823205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.823221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.827556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.827598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.827629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.832102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.832158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.832174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.836569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.836609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.836639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.841218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.841257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.841271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.845629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.845672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.845686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.850221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.850261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.850274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.854689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.854728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.854757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.859210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.859251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.859264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.863506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.863544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.863573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.867881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.867920] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.867950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.872175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.872214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.872244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.876700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.876878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.876896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.881431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.881485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.881515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.885952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.885995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.886009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.890457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.890529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.890558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.895100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.895156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.895171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.899315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 
08:29:51.899353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.899382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.903548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.903587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.903616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.907677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.907715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.907744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.912063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.912100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.912123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.916537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.916574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.916603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.920790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.920827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.920856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.925197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.925234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.925262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.929434] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.929474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.245 [2024-10-15 08:29:51.929488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.245 [2024-10-15 08:29:51.933853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.245 [2024-10-15 08:29:51.933891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.933920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.938345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.938385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.938399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.942744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.942781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.942809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.947287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.947322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.947335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.951736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.951774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.951802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.956013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.956051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.956080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.960492] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.960529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.960557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.964718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.964755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.964783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.968941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.968978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.969007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.246 [2024-10-15 08:29:51.973246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.246 [2024-10-15 08:29:51.973282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.246 [2024-10-15 08:29:51.973310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:51.977354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:51.977390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:51.977419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:51.981648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:51.981686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:51.981714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:51.986216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:51.986255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:51.986269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:51.990673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:51.990710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:51.990739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:51.995053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:51.995092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:51.995121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:51.999453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:51.999488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:51.999516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:52.003873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:52.003910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:52.003939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:52.008113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:52.008182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:52.008211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:52.012264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.506 [2024-10-15 08:29:52.012301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.506 [2024-10-15 08:29:52.012330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.506 [2024-10-15 08:29:52.016416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.016469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.016498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.020807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.020843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.020887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.025007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.025043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.025072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.029493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.029529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.029557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.033870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.033907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.033954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.038463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.038503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.038517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.042962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.043000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.043029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.047518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.047701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.047733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.052031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.052094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.052123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.056294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.056330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.056343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.060476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.060513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.060539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.064503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.064554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.064582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.068637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.068673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.068701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.072626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.072662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.072690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.076632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.076666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:50.507 [2024-10-15 08:29:52.076693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.080962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.081001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.081030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.085445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.085481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.085508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.089812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.089848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.089877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.094355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.094394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.094407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.098799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.098835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.098865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.103439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.103468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.103495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.107864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.107901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.107930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.112226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.112260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.112274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.116642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.116680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.116709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.121161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.121210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.121225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.125738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.125939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.125957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.130426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.130468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.130481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.134819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.134855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.507 [2024-10-15 08:29:52.134884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.507 [2024-10-15 08:29:52.139214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.507 [2024-10-15 08:29:52.139252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.139266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.143714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.143749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.143778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.148320] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.148357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.148370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.152643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.152678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.152706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.156843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.156879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.156906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.160999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.161034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.161063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.165098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.165157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.165170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.169060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 
00:18:50.508 [2024-10-15 08:29:52.169096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.169124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.173137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.173183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.173211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.177049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.177085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.177113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.181439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.181474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.181503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.185767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.185803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.185831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.190274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.190314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.190328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.194620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.194656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.194684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.199031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.199085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.199115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.203298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.203332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.203360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.207263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.207297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.207326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.211285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.211319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.211347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.215294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.215327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.215356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.219256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.219291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.219318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.223343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.223377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.223405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.227352] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.227387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.227415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.508 [2024-10-15 08:29:52.231501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.508 [2024-10-15 08:29:52.231538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.508 [2024-10-15 08:29:52.231566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.235952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.235991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.236020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.240406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.240441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.240469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.244632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.244667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.244695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.248908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.248963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.248991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.253297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.253332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.253360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:50.769 [2024-10-15 08:29:52.257278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.257311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.257339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.261207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.261241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.261269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.265214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.265258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.265286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.269113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.269174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.269201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.273044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.273079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.273107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.277093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.277137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.277166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.281287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.281323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.281351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.285684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.285720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.285749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.290054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.290093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.290122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.294629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.294847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.294864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.299230] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.299270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.299284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.303706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.303775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.303804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.308233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.308272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.308285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.312607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.312644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.312673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.317060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.317132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.769 [2024-10-15 08:29:52.317148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.769 [2024-10-15 08:29:52.321471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.769 [2024-10-15 08:29:52.321523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.321553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.325723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.325907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.325941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.330231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.330271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.330285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.334541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.334577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.334605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.339104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.339156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.339170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.343535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.343573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:50.770 [2024-10-15 08:29:52.343588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.348070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.348111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.348140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.352558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.352742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.352777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.357194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.357234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.357248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.361719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.361757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.361786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.366303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.366343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.366357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.370952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.370995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.371008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.375505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.375665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.375682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.380217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.380257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.380270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.384575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.384611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.384639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.389002] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.389039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.389067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.393468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.393519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.393547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.398082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.398135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.398159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.402433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.402599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.402616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.407093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.407145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.407160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.411704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.411743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.411772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.416058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.416099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.416113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.420472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.420510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.420540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.424784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.424822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.424851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.429219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.429257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.429270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.433695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.433733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.433762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.438283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 
00:18:50.770 [2024-10-15 08:29:52.438324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.438337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.442649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.442687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.442716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.770 [2024-10-15 08:29:52.447214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.770 [2024-10-15 08:29:52.447268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.770 [2024-10-15 08:29:52.447313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.451563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.451601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.451630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.456052] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.456093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.456107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.460336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.460373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.460402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.464681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.464720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.464749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.469051] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.469106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.469155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.473527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.473564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.473593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.477911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.477950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.477980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.482341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.482380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.482394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.486771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.486810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.486823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.491077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.491146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.491176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:50.771 [2024-10-15 08:29:52.495480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:50.771 [2024-10-15 08:29:52.495518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.771 [2024-10-15 08:29:52.495547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:51.032 [2024-10-15 08:29:52.499766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.499804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.499834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.504260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.504299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.504313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.508692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.508731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.508761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.513086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.513153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.513167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.517568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.517606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.517635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.521805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.521842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.521887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.526024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.526061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.526090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.530297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.530336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.530350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.534755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.534793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.534822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.539104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.539170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.539201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.543480] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.543518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.543548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.547851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.547887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.547915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.552243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.552280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.552309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.556432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.556478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.556507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.560617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.560654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.560683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.564820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.564874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.564887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.569084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.569149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.569181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.573393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.573430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.573460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.577661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.577846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.577880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.582383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.582422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.032 [2024-10-15 08:29:52.582436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.586722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.032 [2024-10-15 08:29:52.586758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:51.032 [2024-10-15 08:29:52.586787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.032 [2024-10-15 08:29:52.591126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.591192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.591223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.595398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.595432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.595460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.599569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.599604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.599633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.603843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.603878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.603906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.608092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.608158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.608188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.612220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.612255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.612283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.616293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.616328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.616355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.620419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.620455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.620484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.624547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.624583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.624611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.628793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.628830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.628858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.633022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.633059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.633103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.637047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.637100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.637137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.641256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.641291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.641319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.645301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.645352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.645382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.649488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.649524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.649552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.653675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.653712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.653740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.657799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.657835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.657863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.661953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.661989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.662018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.666125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.666195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.666208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.670334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.670371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.670384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.674442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 
00:18:51.033 [2024-10-15 08:29:52.674494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.674522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.678640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.678676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.678705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.682967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.683003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.683033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.687310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.687345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.687373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.691451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.691486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.691514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.695696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.695732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.695759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.699865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.699900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.699928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.704152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.704204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.704234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.033 [2024-10-15 08:29:52.708564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.033 [2024-10-15 08:29:52.708603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.033 [2024-10-15 08:29:52.708617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.712796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.712831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.712860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.717520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.717557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.717586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.722063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.722101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.722143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.726539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.726733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.726765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.731325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.731364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.731377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.735842] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.735880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.735908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.740455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.740491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.740520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.744883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.744939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.744953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.749354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.749409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.749439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.753750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.753786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.753814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.034 [2024-10-15 08:29:52.758300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.034 [2024-10-15 08:29:52.758348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.034 [2024-10-15 08:29:52.758361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.294 [2024-10-15 08:29:52.762743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.294 [2024-10-15 08:29:52.762778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.294 [2024-10-15 08:29:52.762806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:51.294 [2024-10-15 08:29:52.767260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.294 [2024-10-15 08:29:52.767310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.767337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.771562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.771598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.771626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.775828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.775864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.775892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.780062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.780099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.780127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.784080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.784149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.784163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.788187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.788222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.788250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.792315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.792351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.792380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.796407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.796457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.796485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.800573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.800609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.800637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.804727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.804762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.804790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.808899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.808937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.808965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.295 7099.00 IOPS, 887.38 MiB/s [2024-10-15T08:29:53.026Z] [2024-10-15 08:29:52.814363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.814591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.814778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.819015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.819243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.819390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.823804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.824002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 
08:29:52.824037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.828361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.828399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.828428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.832607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.832644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.832672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.836797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.836833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.836862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.841177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.841214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.841242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.845255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.845292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.845320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.849361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.849397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.849426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.853416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.853486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.853516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.857662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.857698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.857726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.861914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.861951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.861978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.866219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.866273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.866286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.870351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.870390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.870403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.874636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.295 [2024-10-15 08:29:52.874672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.295 [2024-10-15 08:29:52.874700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.295 [2024-10-15 08:29:52.878800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.878837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.878865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.883021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.883057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.883086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.887224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.887259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.887287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.891338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.891373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.891401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.895528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.895565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.895593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.899725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.899762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.899790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.904080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.904144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.904174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.908274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.908308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.908336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.912499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.912537] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.912565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.916605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.916641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.916669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.920762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.920799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.920828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.925010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.925046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.925091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.929237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.929274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.929302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.933573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.933612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.933640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.938006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.938047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.938076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.942370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 
00:18:51.296 [2024-10-15 08:29:52.942410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.942423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.946791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.946830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.946843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.951164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.951201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.951215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.955536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.955572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.955585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.959872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.959909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.959922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.964229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.964268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.964281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.968597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.968634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.968647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.972945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.972982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.972995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.977143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.977210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.977223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.981385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.981586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.981602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.985856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.985896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.985910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.990457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.990527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.990556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.994886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.296 [2024-10-15 08:29:52.994924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.296 [2024-10-15 08:29:52.994954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.296 [2024-10-15 08:29:52.999316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.297 [2024-10-15 08:29:52.999365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.297 [2024-10-15 08:29:52.999378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.297 [2024-10-15 08:29:53.003808] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.297 [2024-10-15 08:29:53.003845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.297 [2024-10-15 08:29:53.003874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.297 [2024-10-15 08:29:53.008197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.297 [2024-10-15 08:29:53.008233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.297 [2024-10-15 08:29:53.008261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.297 [2024-10-15 08:29:53.012457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.297 [2024-10-15 08:29:53.012494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.297 [2024-10-15 08:29:53.012523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.297 [2024-10-15 08:29:53.016722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.297 [2024-10-15 08:29:53.016760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.297 [2024-10-15 08:29:53.016788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.297 [2024-10-15 08:29:53.020923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.297 [2024-10-15 08:29:53.020960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.297 [2024-10-15 08:29:53.020989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.025162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.025198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.025228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.029308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.029345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.029373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:51.556 [2024-10-15 08:29:53.033568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.033606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.033634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.038036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.038076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.038089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.042799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.042835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.042863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.047262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.047301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.047316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.051561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.051599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.051612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.055918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.055958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.055972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.060337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.060374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.060387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.064644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.064683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.064697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.069104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.069157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.069171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.073464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.073505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.073519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.077991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.078031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.078045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.082329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.082369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.082382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.086606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.086644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.086657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.090988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.091027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.091040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.095317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.095356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.095369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.099825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.099863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.099892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.104247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.104285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.104298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.108670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.108707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.108736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.113164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.113232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.113247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.117577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.117774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.117793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.122408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.122449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:51.556 [2024-10-15 08:29:53.122462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.126959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.126998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.127027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.131483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.131523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.131537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.135885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.135923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.135952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.140498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.140701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.140735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.145068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.145108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.145143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.149576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.149613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.149642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:51.556 [2024-10-15 08:29:53.153896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:51.556 [2024-10-15 08:29:53.153933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.556 [2024-10-15 08:29:53.153962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:51.556 [2024-10-15 08:29:53.158340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190)
00:18:51.556 [2024-10-15 08:29:53.158379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:51.556 [2024-10-15 08:29:53.158392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
(The same three-message sequence — a data digest error reported by nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done on tqpair=(0x556190), the corresponding READ command print for sqid:1 cid:15 nsid:1 len:32 at a varying LBA, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061 — repeats roughly every 4-5 ms, continuously from 08:29:53.162825 through 08:29:53.783243.)
00:18:52.081 [2024-10-15 08:29:53.787580]
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:52.081 [2024-10-15 08:29:53.787633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.081 [2024-10-15 08:29:53.787663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.081 [2024-10-15 08:29:53.791920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:52.081 [2024-10-15 08:29:53.791976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.081 [2024-10-15 08:29:53.791989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:52.081 [2024-10-15 08:29:53.796276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:52.081 [2024-10-15 08:29:53.796317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.081 [2024-10-15 08:29:53.796331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:52.081 [2024-10-15 08:29:53.800452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:52.081 [2024-10-15 08:29:53.800518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.081 [2024-10-15 08:29:53.800546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:52.081 [2024-10-15 08:29:53.804659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:52.081 [2024-10-15 08:29:53.804712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.081 [2024-10-15 08:29:53.804740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:52.340 [2024-10-15 08:29:53.808810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x556190) 00:18:52.340 [2024-10-15 08:29:53.808860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.340 [2024-10-15 08:29:53.808888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:52.340 7099.00 IOPS, 887.38 MiB/s 00:18:52.340 Latency(us) 00:18:52.340 [2024-10-15T08:29:54.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.340 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:52.340 nvme0n1 : 2.00 7096.61 887.08 0.00 0.00 2251.26 1765.00 7357.91 00:18:52.340 [2024-10-15T08:29:54.071Z] =================================================================================================================== 
00:18:52.340 [2024-10-15T08:29:54.071Z] Total : 7096.61 887.08 0.00 0.00 2251.26 1765.00 7357.91 00:18:52.340 { 00:18:52.340 "results": [ 00:18:52.340 { 00:18:52.340 "job": "nvme0n1", 00:18:52.340 "core_mask": "0x2", 00:18:52.340 "workload": "randread", 00:18:52.340 "status": "finished", 00:18:52.340 "queue_depth": 16, 00:18:52.340 "io_size": 131072, 00:18:52.340 "runtime": 2.002929, 00:18:52.340 "iops": 7096.607019020645, 00:18:52.340 "mibps": 887.0758773775806, 00:18:52.340 "io_failed": 0, 00:18:52.340 "io_timeout": 0, 00:18:52.340 "avg_latency_us": 2251.259618557888, 00:18:52.340 "min_latency_us": 1765.0036363636364, 00:18:52.340 "max_latency_us": 7357.905454545455 00:18:52.340 } 00:18:52.340 ], 00:18:52.340 "core_count": 1 00:18:52.340 } 00:18:52.340 08:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:52.340 08:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:52.340 08:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:52.340 08:29:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:52.340 | .driver_specific 00:18:52.340 | .nvme_error 00:18:52.340 | .status_code 00:18:52.340 | .command_transient_transport_error' 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 458 > 0 )) 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80653 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80653 ']' 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80653 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80653 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:52.599 killing process with pid 80653 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80653' 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80653 00:18:52.599 Received shutdown signal, test time was about 2.000000 seconds 00:18:52.599 00:18:52.599 Latency(us) 00:18:52.599 [2024-10-15T08:29:54.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.599 [2024-10-15T08:29:54.330Z] =================================================================================================================== 00:18:52.599 [2024-10-15T08:29:54.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.599 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80653 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80709 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80709 /var/tmp/bperf.sock 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80709 ']' 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.858 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:52.858 [2024-10-15 08:29:54.487774] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:18:52.858 [2024-10-15 08:29:54.488793] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80709 ] 00:18:53.117 [2024-10-15 08:29:54.640985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.117 [2024-10-15 08:29:54.712974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.117 [2024-10-15 08:29:54.785095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:53.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:53.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:53.377 08:29:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:53.635 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:53.635 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.635 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.635 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.635 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:53.636 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:53.894 nvme0n1 00:18:53.894 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:53.894 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.894 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.895 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.895 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:53.895 08:29:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:53.895 Running I/O for 2 seconds... 
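For orientation, the randwrite digest-error pass that starts here reduces to the RPC sequence below, reassembled from the trace above as a minimal sketch. It assumes the repo paths used in this run, that bdevperf is already listening on /var/tmp/bperf.sock, and that accel_error_inject_error (issued through the rpc_cmd helper) goes to the nvmf target application over its default RPC socket rather than to bdevperf; all flags are copied verbatim from the trace.

  # bdevperf side: enable per-error NVMe statistics (flags as captured above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: start with crc32c error injection disabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # attach the remote namespace with data digest (--ddgst) enabled so payloads are CRC32C-checked
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt crc32c results (-t corrupt -i 256 as in the trace), which produces the digest errors logged below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # run the 2-second randwrite workload, then read back the transient transport error counter
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The pass is judged by that final jq query returning a non-zero count, as in the '(( 458 > 0 ))' check after the earlier randread run.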
00:18:54.154 [2024-10-15 08:29:55.653302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fef90 00:18:54.154 [2024-10-15 08:29:55.656034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.656092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.670371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166feb58 00:18:54.154 [2024-10-15 08:29:55.673018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.673067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.686521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fe2e8 00:18:54.154 [2024-10-15 08:29:55.689059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.689130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.703077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fda78 00:18:54.154 [2024-10-15 08:29:55.705544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.705577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.718900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fd208 00:18:54.154 [2024-10-15 08:29:55.721398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.721431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.734991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fc998 00:18:54.154 [2024-10-15 08:29:55.737521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.737554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.751060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fc128 00:18:54.154 [2024-10-15 08:29:55.753479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.753529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.766875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fb8b8 00:18:54.154 [2024-10-15 08:29:55.769333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.769365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.782812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fb048 00:18:54.154 [2024-10-15 08:29:55.785196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.785245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.798027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fa7d8 00:18:54.154 [2024-10-15 08:29:55.800360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.800393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.812936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f9f68 00:18:54.154 [2024-10-15 08:29:55.815294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.815353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.827849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f96f8 00:18:54.154 [2024-10-15 08:29:55.830107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.830183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.842674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f8e88 00:18:54.154 [2024-10-15 08:29:55.844943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.844992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.857489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f8618 00:18:54.154 [2024-10-15 08:29:55.859700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.859752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:54.154 [2024-10-15 08:29:55.872386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f7da8 00:18:54.154 [2024-10-15 08:29:55.874608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.154 [2024-10-15 08:29:55.874646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:54.413 [2024-10-15 08:29:55.887193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f7538 00:18:54.413 [2024-10-15 08:29:55.889334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.413 [2024-10-15 08:29:55.889384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:54.413 [2024-10-15 08:29:55.902005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f6cc8 00:18:54.413 [2024-10-15 08:29:55.904210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.413 [2024-10-15 08:29:55.904243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:54.413 [2024-10-15 08:29:55.917465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f6458 00:18:54.413 [2024-10-15 08:29:55.919673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.413 [2024-10-15 08:29:55.919726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:54.413 [2024-10-15 08:29:55.933284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f5be8 00:18:54.413 [2024-10-15 08:29:55.935499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.413 [2024-10-15 08:29:55.935538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:54.413 [2024-10-15 08:29:55.949389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f5378 00:18:54.413 [2024-10-15 08:29:55.951646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.413 [2024-10-15 08:29:55.951682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:54.413 [2024-10-15 08:29:55.964934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f4b08 00:18:54.413 [2024-10-15 08:29:55.967083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.413 [2024-10-15 08:29:55.967144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:54.413 [2024-10-15 08:29:55.980176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f4298 00:18:54.413 [2024-10-15 08:29:55.982271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.413 [2024-10-15 08:29:55.982307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:55.995214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f3a28 00:18:54.414 [2024-10-15 08:29:55.997297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:55.997330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.011118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f31b8 00:18:54.414 [2024-10-15 08:29:56.013357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.013393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.027476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f2948 00:18:54.414 [2024-10-15 08:29:56.029581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.029615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.043854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f20d8 00:18:54.414 [2024-10-15 08:29:56.045914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.045948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.059858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f1868 00:18:54.414 [2024-10-15 08:29:56.061886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.061922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.075810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f0ff8 00:18:54.414 [2024-10-15 08:29:56.077889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.077922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.091545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f0788 00:18:54.414 [2024-10-15 08:29:56.093579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.093629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.107197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166eff18 00:18:54.414 [2024-10-15 08:29:56.109158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.109193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.122790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ef6a8 00:18:54.414 [2024-10-15 08:29:56.124780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.124827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:54.414 [2024-10-15 08:29:56.138207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166eee38 00:18:54.414 [2024-10-15 08:29:56.140095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.414 [2024-10-15 08:29:56.140155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:54.672 [2024-10-15 08:29:56.153481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ee5c8 00:18:54.672 [2024-10-15 08:29:56.155396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.672 [2024-10-15 08:29:56.155451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:54.672 [2024-10-15 08:29:56.168814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166edd58 00:18:54.672 [2024-10-15 08:29:56.170799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.170851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.184442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ed4e8 00:18:54.673 [2024-10-15 08:29:56.186353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.186394] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.199900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ecc78 00:18:54.673 [2024-10-15 08:29:56.201763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.201810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.216315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ec408 00:18:54.673 [2024-10-15 08:29:56.218166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.218200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.232659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ebb98 00:18:54.673 [2024-10-15 08:29:56.234535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.234602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.248490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166eb328 00:18:54.673 [2024-10-15 08:29:56.250412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.250451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.264574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166eaab8 00:18:54.673 [2024-10-15 08:29:56.266423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.266461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.279598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ea248 00:18:54.673 [2024-10-15 08:29:56.281465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.281514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.295732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e99d8 00:18:54.673 [2024-10-15 08:29:56.297496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.297530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.311808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e9168 00:18:54.673 [2024-10-15 08:29:56.313574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.313608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.328245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e88f8 00:18:54.673 [2024-10-15 08:29:56.329978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.330012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.344495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e8088 00:18:54.673 [2024-10-15 08:29:56.346261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.346298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.360091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e7818 00:18:54.673 [2024-10-15 08:29:56.361779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.361827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.376067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e6fa8 00:18:54.673 [2024-10-15 08:29:56.377812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.377860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:54.673 [2024-10-15 08:29:56.392409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e6738 00:18:54.673 [2024-10-15 08:29:56.394097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.673 [2024-10-15 08:29:56.394166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.407774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e5ec8 00:18:54.932 [2024-10-15 08:29:56.409400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 
08:29:56.409433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.424000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e5658 00:18:54.932 [2024-10-15 08:29:56.425702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.425750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.440261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e4de8 00:18:54.932 [2024-10-15 08:29:56.441811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.441861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.456315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e4578 00:18:54.932 [2024-10-15 08:29:56.457854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.457903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.472612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e3d08 00:18:54.932 [2024-10-15 08:29:56.474203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.474239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.488879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e3498 00:18:54.932 [2024-10-15 08:29:56.490427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.490478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.505176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e2c28 00:18:54.932 [2024-10-15 08:29:56.506714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.506754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.521587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e23b8 00:18:54.932 [2024-10-15 08:29:56.523111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3195 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:54.932 [2024-10-15 08:29:56.523162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.537855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e1b48 00:18:54.932 [2024-10-15 08:29:56.539399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.539435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.554216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e12d8 00:18:54.932 [2024-10-15 08:29:56.555694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.555742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.570226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e0a68 00:18:54.932 [2024-10-15 08:29:56.571673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.571721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.585770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e01f8 00:18:54.932 [2024-10-15 08:29:56.587189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.587240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.602104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166df988 00:18:54.932 [2024-10-15 08:29:56.603534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.603585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.618347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166df118 00:18:54.932 [2024-10-15 08:29:56.619711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.619760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:54.932 15814.00 IOPS, 61.77 MiB/s [2024-10-15T08:29:56.663Z] [2024-10-15 08:29:56.633963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166de8a8 00:18:54.932 [2024-10-15 08:29:56.635352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.635386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:54.932 [2024-10-15 08:29:56.650114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166de038 00:18:54.932 [2024-10-15 08:29:56.651466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:54.932 [2024-10-15 08:29:56.651516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:55.191 [2024-10-15 08:29:56.672757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166de038 00:18:55.191 [2024-10-15 08:29:56.675354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.191 [2024-10-15 08:29:56.675417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.191 [2024-10-15 08:29:56.688175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166de8a8 00:18:55.191 [2024-10-15 08:29:56.690772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.191 [2024-10-15 08:29:56.690821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:55.191 [2024-10-15 08:29:56.704789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166df118 00:18:55.191 [2024-10-15 08:29:56.707357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.191 [2024-10-15 08:29:56.707397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:55.191 [2024-10-15 08:29:56.721069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166df988 00:18:55.191 [2024-10-15 08:29:56.723698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.191 [2024-10-15 08:29:56.723752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:55.191 [2024-10-15 08:29:56.737024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e01f8 00:18:55.191 [2024-10-15 08:29:56.739580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.191 [2024-10-15 08:29:56.739630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:55.191 [2024-10-15 08:29:56.752803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e0a68 00:18:55.191 [2024-10-15 
08:29:56.755402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.191 [2024-10-15 08:29:56.755454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:55.191 [2024-10-15 08:29:56.769538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e12d8 00:18:55.191 [2024-10-15 08:29:56.772020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.191 [2024-10-15 08:29:56.772089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:55.191 [2024-10-15 08:29:56.785654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e1b48 00:18:55.191 [2024-10-15 08:29:56.787967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.788019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:55.192 [2024-10-15 08:29:56.801215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e23b8 00:18:55.192 [2024-10-15 08:29:56.803612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.803666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:55.192 [2024-10-15 08:29:56.817620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e2c28 00:18:55.192 [2024-10-15 08:29:56.820057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.820139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:55.192 [2024-10-15 08:29:56.833575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e3498 00:18:55.192 [2024-10-15 08:29:56.835845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.835898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:55.192 [2024-10-15 08:29:56.849241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e3d08 00:18:55.192 [2024-10-15 08:29:56.851599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.851651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:55.192 [2024-10-15 08:29:56.865539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e4578 
00:18:55.192 [2024-10-15 08:29:56.867891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.867942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:55.192 [2024-10-15 08:29:56.881542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e4de8 00:18:55.192 [2024-10-15 08:29:56.883843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.883894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:55.192 [2024-10-15 08:29:56.896883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e5658 00:18:55.192 [2024-10-15 08:29:56.899280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.899316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:55.192 [2024-10-15 08:29:56.912849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e5ec8 00:18:55.192 [2024-10-15 08:29:56.915172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.192 [2024-10-15 08:29:56.915214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:55.450 [2024-10-15 08:29:56.928694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e6738 00:18:55.450 [2024-10-15 08:29:56.930943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.450 [2024-10-15 08:29:56.930997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:55.450 [2024-10-15 08:29:56.944743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e6fa8 00:18:55.450 [2024-10-15 08:29:56.947041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.450 [2024-10-15 08:29:56.947093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:55.450 [2024-10-15 08:29:56.961100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e7818 00:18:55.450 [2024-10-15 08:29:56.963397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.450 [2024-10-15 08:29:56.963448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:55.450 [2024-10-15 08:29:56.977097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) 
with pdu=0x2000166e8088 00:18:55.450 [2024-10-15 08:29:56.979356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.450 [2024-10-15 08:29:56.979395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:55.450 [2024-10-15 08:29:56.993537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e88f8 00:18:55.450 [2024-10-15 08:29:56.995740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.450 [2024-10-15 08:29:56.995794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:55.450 [2024-10-15 08:29:57.009980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e9168 00:18:55.450 [2024-10-15 08:29:57.012231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.450 [2024-10-15 08:29:57.012264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:55.450 [2024-10-15 08:29:57.026224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166e99d8 00:18:55.450 [2024-10-15 08:29:57.028582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.450 [2024-10-15 08:29:57.028631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:55.450 [2024-10-15 08:29:57.042352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ea248 00:18:55.450 [2024-10-15 08:29:57.044538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.044586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:55.451 [2024-10-15 08:29:57.058633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166eaab8 00:18:55.451 [2024-10-15 08:29:57.060682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.060729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:55.451 [2024-10-15 08:29:57.074324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166eb328 00:18:55.451 [2024-10-15 08:29:57.076417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.076467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:55.451 [2024-10-15 08:29:57.090430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1c93230) with pdu=0x2000166ebb98 00:18:55.451 [2024-10-15 08:29:57.092560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.092608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:55.451 [2024-10-15 08:29:57.106054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ec408 00:18:55.451 [2024-10-15 08:29:57.108103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.108178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:55.451 [2024-10-15 08:29:57.121070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ecc78 00:18:55.451 [2024-10-15 08:29:57.123086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.123145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:55.451 [2024-10-15 08:29:57.137412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ed4e8 00:18:55.451 [2024-10-15 08:29:57.139428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.139464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:55.451 [2024-10-15 08:29:57.153619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166edd58 00:18:55.451 [2024-10-15 08:29:57.155592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.155639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:55.451 [2024-10-15 08:29:57.169683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166ee5c8 00:18:55.451 [2024-10-15 08:29:57.171659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.451 [2024-10-15 08:29:57.171712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:55.709 [2024-10-15 08:29:57.185096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166eee38 00:18:55.709 [2024-10-15 08:29:57.187143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.709 [2024-10-15 08:29:57.187190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:55.709 [2024-10-15 08:29:57.201382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1c93230) with pdu=0x2000166ef6a8 00:18:55.710 [2024-10-15 08:29:57.203282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.203326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.217641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166eff18 00:18:55.710 [2024-10-15 08:29:57.219566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.219619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.234027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f0788 00:18:55.710 [2024-10-15 08:29:57.236061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.236100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.250546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f0ff8 00:18:55.710 [2024-10-15 08:29:57.252411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.252447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.266865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f1868 00:18:55.710 [2024-10-15 08:29:57.268727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.268774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.283174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f20d8 00:18:55.710 [2024-10-15 08:29:57.284953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.284989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.299343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f2948 00:18:55.710 [2024-10-15 08:29:57.301103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.301146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.315156] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f31b8 00:18:55.710 [2024-10-15 08:29:57.316904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.316954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.331122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f3a28 00:18:55.710 [2024-10-15 08:29:57.332886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.332936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.347230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f4298 00:18:55.710 [2024-10-15 08:29:57.348966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.349016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.362948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f4b08 00:18:55.710 [2024-10-15 08:29:57.364712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.364759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.378761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f5378 00:18:55.710 [2024-10-15 08:29:57.380547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.380596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.394984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f5be8 00:18:55.710 [2024-10-15 08:29:57.396740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.396788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:55.710 [2024-10-15 08:29:57.410800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f6458 00:18:55.710 [2024-10-15 08:29:57.412489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.412538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:55.710 
[2024-10-15 08:29:57.425982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f6cc8 00:18:55.710 [2024-10-15 08:29:57.427639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.710 [2024-10-15 08:29:57.427703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.442041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f7538 00:18:55.969 [2024-10-15 08:29:57.443701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.443765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.457284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f7da8 00:18:55.969 [2024-10-15 08:29:57.458940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.458989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.472247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f8618 00:18:55.969 [2024-10-15 08:29:57.473826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.473874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.488605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f8e88 00:18:55.969 [2024-10-15 08:29:57.490175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.490211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.504734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f96f8 00:18:55.969 [2024-10-15 08:29:57.506285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.506320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.520424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166f9f68 00:18:55.969 [2024-10-15 08:29:57.521936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.521971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:18:55.969 [2024-10-15 08:29:57.536756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fa7d8 00:18:55.969 [2024-10-15 08:29:57.538299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.538351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.553183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fb048 00:18:55.969 [2024-10-15 08:29:57.554764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.554801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.568370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fb8b8 00:18:55.969 [2024-10-15 08:29:57.569784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.569817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.583975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fc128 00:18:55.969 [2024-10-15 08:29:57.585473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.585506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.599590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fc998 00:18:55.969 [2024-10-15 08:29:57.601024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.601073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:55.969 [2024-10-15 08:29:57.614384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fd208 00:18:55.969 [2024-10-15 08:29:57.615816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.615864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:55.969 15877.00 IOPS, 62.02 MiB/s [2024-10-15T08:29:57.700Z] [2024-10-15 08:29:57.631564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c93230) with pdu=0x2000166fda78 00:18:55.969 [2024-10-15 08:29:57.632921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.969 [2024-10-15 08:29:57.632953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:55.969 00:18:55.969 Latency(us) 00:18:55.969 [2024-10-15T08:29:57.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.969 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.969 nvme0n1 : 2.01 15869.50 61.99 0.00 0.00 8058.10 7060.01 31933.91 00:18:55.969 [2024-10-15T08:29:57.700Z] =================================================================================================================== 00:18:55.969 [2024-10-15T08:29:57.701Z] Total : 15869.50 61.99 0.00 0.00 8058.10 7060.01 31933.91 00:18:55.970 { 00:18:55.970 "results": [ 00:18:55.970 { 00:18:55.970 "job": "nvme0n1", 00:18:55.970 "core_mask": "0x2", 00:18:55.970 "workload": "randwrite", 00:18:55.970 "status": "finished", 00:18:55.970 "queue_depth": 128, 00:18:55.970 "io_size": 4096, 00:18:55.970 "runtime": 2.009011, 00:18:55.970 "iops": 15869.499967894651, 00:18:55.970 "mibps": 61.99023424958848, 00:18:55.970 "io_failed": 0, 00:18:55.970 "io_timeout": 0, 00:18:55.970 "avg_latency_us": 8058.102325050899, 00:18:55.970 "min_latency_us": 7060.014545454545, 00:18:55.970 "max_latency_us": 31933.905454545453 00:18:55.970 } 00:18:55.970 ], 00:18:55.970 "core_count": 1 00:18:55.970 } 00:18:55.970 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:55.970 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:55.970 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:55.970 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:55.970 | .driver_specific 00:18:55.970 | .nvme_error 00:18:55.970 | .status_code 00:18:55.970 | .command_transient_transport_error' 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 125 > 0 )) 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80709 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80709 ']' 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80709 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80709 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80709' 00:18:56.537 killing process with pid 80709 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80709 00:18:56.537 Received shutdown signal, test time was about 2.000000 seconds 
00:18:56.537 00:18:56.537 Latency(us) 00:18:56.537 [2024-10-15T08:29:58.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.537 [2024-10-15T08:29:58.268Z] =================================================================================================================== 00:18:56.537 [2024-10-15T08:29:58.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:56.537 08:29:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80709 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80758 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80758 /var/tmp/bperf.sock 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 80758 ']' 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:56.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.537 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:56.796 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:56.796 Zero copy mechanism will not be used. 00:18:56.796 [2024-10-15 08:29:58.313758] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:18:56.796 [2024-10-15 08:29:58.313884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80758 ] 00:18:56.796 [2024-10-15 08:29:58.449106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.796 [2024-10-15 08:29:58.523741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.056 [2024-10-15 08:29:58.597437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:57.056 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:57.056 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:57.056 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:57.056 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:57.315 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:57.315 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.315 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:57.315 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.315 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:57.315 08:29:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:57.573 nvme0n1 00:18:57.573 08:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:57.573 08:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.573 08:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:57.832 08:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.832 08:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:57.832 08:29:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:57.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:57.832 Zero copy mechanism will not be used. 00:18:57.832 Running I/O for 2 seconds... 
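The trace above prepares the second digest-error pass: NVMe error statistics are enabled on the bdev layer, any previous accel error injection is cleared, a controller is attached over TCP with data digest enabled (--ddgst), crc32c corruption injection is armed on the accel layer (-o crc32c -t corrupt -i 32), and bdevperf then runs 2 seconds of 131072-byte random writes at queue depth 16. The lines below condense that flow into a standalone sketch; it is an illustration of what this run does, not the host/digest.sh source. $SPDK is shorthand for the repo root (/home/vagrant/spdk_repo/spdk in this run), and the socket paths, target address, and subsystem NQN are copied from the trace. The last command mirrors the get_transient_errcount helper seen after the previous pass.

  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # clear any earlier injection (target-side accel, as in the trace)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32    # arm crc32c corruption so data digests fail
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The count printed by the final command is what the script asserts to be non-zero, as in the (( 125 > 0 )) check recorded earlier in this trace for the previous pass.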
00:18:57.832 [2024-10-15 08:29:59.422256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.832 [2024-10-15 08:29:59.422626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.832 [2024-10-15 08:29:59.422673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.832 [2024-10-15 08:29:59.427671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.428018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.428059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.432839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.433192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.433236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.437930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.438288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.438323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.442982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.443458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.443490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.448156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.448444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.448471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.452979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.453330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.453363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.458001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.458362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.458395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.463255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.463566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.463592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.468497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.468785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.468811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.473629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.473915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.473942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.478797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.479310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.479343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.484087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.484440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.484470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.489089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.489418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.489446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.493956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.494308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.494333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.498913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.499441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.499474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.504524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.504813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.504839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.509898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.510275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.510314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.515238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.515570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.515596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.520363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.520645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.520670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.525811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.526143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.526188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.531121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.531588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.531621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.536635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.536993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.537022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.542318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.542624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.542653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.547654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.547932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.547959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.552788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.553105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.553152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:57.833 [2024-10-15 08:29:59.557965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:57.833 [2024-10-15 08:29:59.558337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.833 [2024-10-15 08:29:59.558369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.563310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.563655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 
[2024-10-15 08:29:59.563681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.568546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.568860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.568889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.573745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.574071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.574100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.579106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.579561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.579594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.584510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.584828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.584856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.589834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.590202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.590231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.595101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.595573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.595597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.600532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.600820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.600847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.605729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.606047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.606077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.610988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.611489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.611523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.616658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.616992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.617021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.621992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.622315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.622352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.627278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.627610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.627637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.632613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.632924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.632953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.637831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.638180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.638209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.643119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.643576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.643609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.648624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.648934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.648962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.653747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.654071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.654099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.658850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.659345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.659377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.664125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.664431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.664458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.669128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.669421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.669448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.674050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.674422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.674445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.679194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.679660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.679692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.684477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.684785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.684812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.689548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.689847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.094 [2024-10-15 08:29:59.689874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.094 [2024-10-15 08:29:59.694623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.094 [2024-10-15 08:29:59.695097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.695141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.700121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.700475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.700503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.705262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.705558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.705584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.710195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 
[2024-10-15 08:29:59.710508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.710534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.715229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.715553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.715578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.720312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.720630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.720657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.725505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.725812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.725839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.730549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.730844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.730871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.735696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.736171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.736220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.741057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.741425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.741456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.746189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.746489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.746546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.751370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.751682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.751708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.756632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.756929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.756956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.761723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.762027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.762054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.766767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.767244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.767276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.772180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.772524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.772557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.777353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.777682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.777708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.782534] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.782831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.782858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.787646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.787943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.787972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.792604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.792890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.792916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.797490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.797776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.797802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.802427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.802750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.802777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.807466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.807764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.807790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.812547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.812830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.812855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:58.095 [2024-10-15 08:29:59.817539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.817821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.817847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.095 [2024-10-15 08:29:59.822377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.095 [2024-10-15 08:29:59.822694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.095 [2024-10-15 08:29:59.822721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.827232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.827558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.827583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.832104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.832440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.832471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.837014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.837328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.837355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.841716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.841995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.842022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.846679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.847145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.847187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.851678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.851959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.851985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.856519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.856797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.856823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.861362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.861656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.861682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.866108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.866493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.866540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.871006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.871481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.871512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.876066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.876401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.876448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.880891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.881205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.881232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.885640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.885918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.885944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.355 [2024-10-15 08:29:59.890517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.355 [2024-10-15 08:29:59.890797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.355 [2024-10-15 08:29:59.890823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.895323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.895620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.895646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.900029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.900381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.900412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.904953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.905265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.905291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.909737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.910019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.910044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.914600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.915049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.915080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.919748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.920057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.920084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.924720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.925006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.925027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.929605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.929885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.929912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.934568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.934845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.934871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.939429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.939708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.939734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.944182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.944521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.944607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.949200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.949482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:58.356 [2024-10-15 08:29:59.949508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.953914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.954263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.954292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.958765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.959262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.959292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.963689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.963987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.964013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.968657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.968944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.968971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.973506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.973784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.973810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.978373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.978704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.978729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.983211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.983490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.983515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.987825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.988159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.988196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.992683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.993002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.993029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:29:59.997873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:29:59.998368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:29:59.998404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:30:00.003133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:30:00.003458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:30:00.003484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:30:00.008202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:30:00.008552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:30:00.008593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:30:00.013387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:30:00.013674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:30:00.013700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:30:00.018631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:30:00.018926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:30:00.018953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:30:00.023721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:30:00.024039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:30:00.024067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:30:00.028692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:30:00.028989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.356 [2024-10-15 08:30:00.029019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.356 [2024-10-15 08:30:00.033626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.356 [2024-10-15 08:30:00.034096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.034139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.039039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.039438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.039499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.044347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.044698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.044724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.049499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.049805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.049833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.054801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.055137] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.055176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.059966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.060441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.060474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.065306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.065617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.065644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.070405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.070698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.070724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.075397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.075683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.075710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.357 [2024-10-15 08:30:00.080248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.357 [2024-10-15 08:30:00.080556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.357 [2024-10-15 08:30:00.080582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.085126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.616 [2024-10-15 08:30:00.085507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.085539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.090042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 
00:18:58.616 [2024-10-15 08:30:00.090425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.090453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.095064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.616 [2024-10-15 08:30:00.095569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.095616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.100236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.616 [2024-10-15 08:30:00.100545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.100570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.105363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.616 [2024-10-15 08:30:00.105679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.105705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.110610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.616 [2024-10-15 08:30:00.111091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.111133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.115859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.616 [2024-10-15 08:30:00.116187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.116226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.121024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.616 [2024-10-15 08:30:00.121378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.121409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.616 [2024-10-15 08:30:00.126059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.616 [2024-10-15 08:30:00.126550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.616 [2024-10-15 08:30:00.126597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.131411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.131701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.131727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.136410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.136715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.136743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.141276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.141563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.141590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.145987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.146343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.146379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.151002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.151325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.151352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.155760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.156104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.156142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.161133] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.161608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.161641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.166546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.166836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.166863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.171592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.171883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.171925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.176728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.177199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.177236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.181951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.182319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.182347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.187065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.187409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.187436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.191890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.192209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.192231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:58.617 [2024-10-15 08:30:00.196730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.197184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.197228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.201741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.202027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.202053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.206745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.207032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.207058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.211901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.212229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.212275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.217068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.217528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.217559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.222460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.222766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.222792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.227589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.227879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.227905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.232752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.233250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.233282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.238124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.238465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.238507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.243388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.243678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.243703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.248370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.248678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.248704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.253265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.253552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.253579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.258352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.258660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.258687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.263570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.264057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.264090] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.269032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.617 [2024-10-15 08:30:00.269362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.617 [2024-10-15 08:30:00.269394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.617 [2024-10-15 08:30:00.274180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.274497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.274523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.279281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.279604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.279630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.284556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.284845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.284872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.289722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.290036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.290065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.294878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.295367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.295399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.300123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.300514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.300553] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.305549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.305860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.305886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.310754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.311233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.311266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.316190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.316496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.316523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.321494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.321794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.321837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.326672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.327013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.327041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.331909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.332250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.332277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.337125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.337445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:58.618 [2024-10-15 08:30:00.337488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.618 [2024-10-15 08:30:00.342405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.618 [2024-10-15 08:30:00.342716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.618 [2024-10-15 08:30:00.342742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.877 [2024-10-15 08:30:00.347461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.877 [2024-10-15 08:30:00.347746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.877 [2024-10-15 08:30:00.347772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.877 [2024-10-15 08:30:00.352486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.877 [2024-10-15 08:30:00.352779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.352806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.357487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.357775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.357802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.362454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.362789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.362815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.367897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.368239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.368267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.373070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.373421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.373447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.378226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.378535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.378589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.383391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.383670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.383697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.388686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.389012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.389040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.393833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.394340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.394373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.399282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.399604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.399630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.404567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.404894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.404925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.409755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.410225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.410254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.415166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.415496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.415552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.878 6045.00 IOPS, 755.62 MiB/s [2024-10-15T08:30:00.609Z] [2024-10-15 08:30:00.421880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.422369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.422402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.427288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.427609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.427631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.432492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.432798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.432825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.437756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.438232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.438265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.443130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.443455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.443496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.448212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 
00:18:58.878 [2024-10-15 08:30:00.448507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.448534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.453382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.453682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.453710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.458455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.458761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.458789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.463628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.463967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.463995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.468981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.469511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.469541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.474465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.474799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.474827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.479726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.480064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.480092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.484974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.485450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.485482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.490364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.490681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.490708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.495443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.878 [2024-10-15 08:30:00.495753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.878 [2024-10-15 08:30:00.495780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.878 [2024-10-15 08:30:00.500497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.500795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.500821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.505686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.506031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.506059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.510812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.511137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.511178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.515904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.516252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.516293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.521239] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.521540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.521568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.526329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.526649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.526675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.531537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.531837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.531865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.536685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.537135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.537168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.542109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.542449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.542486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.547347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.547645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.547672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.552508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.552811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.552838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:18:58.879 [2024-10-15 08:30:00.557621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.557922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.557950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.562780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.563077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.563105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.567843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.568163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.568203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.573148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.573609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.573642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.578533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.578836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.578864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.583810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.584121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.584144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.588881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.589386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.589419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.594371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.594675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.594703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.599423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.599722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.599750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:58.879 [2024-10-15 08:30:00.604643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:58.879 [2024-10-15 08:30:00.605116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:58.879 [2024-10-15 08:30:00.605166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.610040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.610359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.610394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.615308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.615621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.615649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.620501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.620799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.620826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.625644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.625961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.625989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.630878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.631225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.631253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.636277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.636596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.636623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.641619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.641933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.641978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.647011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.647339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.647376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.652276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.652560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.652586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.657347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.657649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.657677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.662434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.662754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.662781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.139 [2024-10-15 08:30:00.667546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.139 [2024-10-15 08:30:00.667840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.139 [2024-10-15 08:30:00.667867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.672631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.672945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.672973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.677714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.678025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.678053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.683023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.683335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.683368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.688137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.688600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.688633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.693717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.694016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.694044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.698810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.699125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 
[2024-10-15 08:30:00.699166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.704050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.704545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.704577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.709544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.709832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.709858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.714537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.714851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.714877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.719377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.719665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.719690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.724332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.724647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.724672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.729122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.729447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.729494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.734050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.734421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.734454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.739239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.739538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.739563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.744150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.744438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.744463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.749033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.749365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.749396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.753770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.754062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.754088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.758710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.759178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.759223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.763784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.764064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.764090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.769028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.769360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.769392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.774302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.774601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.774629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.779585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.779884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.779911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.784939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.785282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.785310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.790274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.790596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.790623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.795443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.795742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.795771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.800706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.801022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.801050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.805993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.806470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.806503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.140 [2024-10-15 08:30:00.811472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.140 [2024-10-15 08:30:00.811768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.140 [2024-10-15 08:30:00.811794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.816614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.816926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.816970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.821902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.822376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.822408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.827203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.827510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.827538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.832411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.832744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.832771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.837706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.838192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.838231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.843274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 
[2024-10-15 08:30:00.843584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.843611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.848420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.848739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.848767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.853707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.854199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.854231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.858984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.859333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.859362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.141 [2024-10-15 08:30:00.864207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.141 [2024-10-15 08:30:00.864535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.141 [2024-10-15 08:30:00.864562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.869213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.869540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.869567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.874403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.874706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.874734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.879793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.880111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.880149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.885044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.885408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.885441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.890215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.890532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.890558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.895418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.895776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.895803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.900666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.900972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.901000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.905883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.906398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.906432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.911204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.911522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.911549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.916412] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.916728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.916756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.921318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.921611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.921637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.926114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.926469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.926501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.931249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.931583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.931609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.936278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.936571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.936597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.941246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.941540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.941566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.946422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.946735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.946761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
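Every entry in the run above follows the same three-line pattern: the TCP transport flags a data digest mismatch on a received data PDU (data_crc32_calc_done), nvme_qpair.c prints the owning WRITE, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR. The digest being verified is the CRC-32C value carried in the DDGST trailer of NVMe/TCP data PDUs; since this is the nvmf_digest_error test, the mismatches are the behaviour under test rather than a regression. A minimal sketch of that checksum, assuming a plain bitwise implementation (SPDK itself uses an accelerated CRC-32C; the helper below is illustrative only):

```python
# Minimal CRC-32C (Castagnoli) sketch -- the checksum carried in the DDGST field
# of an NVMe/TCP data PDU, which data_crc32_calc_done() is validating above.
# Illustration only; SPDK uses a hardware/SIMD-accelerated CRC-32C in practice.
CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial

def crc32c(data: bytes, crc: int = 0xFFFFFFFF) -> int:
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ CRC32C_POLY if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Well-known check value for CRC-32C:
assert crc32c(b"123456789") == 0xE3069283

# A receiver recomputes the digest over the PDU payload and compares it with the
# DDGST trailer; any mismatch fails the command the way the log entries show.
payload = b"\x00" * 4096                               # hypothetical PDU DATA field
good_ddgst = crc32c(payload)
corrupted = bytes([payload[0] ^ 0xFF]) + payload[1:]   # single flipped byte
assert crc32c(corrupted) != good_ddgst                 # digest error detected
```

A failed digest only means the payload cannot be trusted in transit, which is why the completion carries a retryable transport status rather than a data error that would be fatal to the I/O.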
00:18:59.400 [2024-10-15 08:30:00.951604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.952065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.952097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.956977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.957291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.957318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.962189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.962487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.962524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.967297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.967607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.967635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.972556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.972859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.972886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.977709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.978022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.978050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.982908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.983405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.983439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.988223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.988539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.988566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.993162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.993460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.993487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:00.998200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:00.998501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:00.998539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.003415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.003742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:01.003767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.008640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.008936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:01.008963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.013782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.014086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:01.014113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.018946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.019466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:01.019499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.024264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.024554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:01.024580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.029104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.029460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:01.029492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.033937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.034292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:01.034326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.038888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.039415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.400 [2024-10-15 08:30:01.039447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.400 [2024-10-15 08:30:01.044046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.400 [2024-10-15 08:30:01.044347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.044373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.048920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.049259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.049306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.053875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.054220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.054246] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.058724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.059203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.059259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.063859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.064162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.064188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.069133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.069455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.069488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.074267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.074582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.074611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.079546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.079838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.079865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.084772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.085096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.085134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.089978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.090343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 
[2024-10-15 08:30:01.090377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.095241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.095585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.095610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.100220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.100528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.100554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.105090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.105437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.105468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.109916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.110261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.110317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.114910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.115429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.115462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.120255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.120587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.120613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.401 [2024-10-15 08:30:01.125500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.401 [2024-10-15 08:30:01.125789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.401 [2024-10-15 08:30:01.125816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.659 [2024-10-15 08:30:01.130605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.659 [2024-10-15 08:30:01.131050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.659 [2024-10-15 08:30:01.131098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.659 [2024-10-15 08:30:01.135885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.659 [2024-10-15 08:30:01.136216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.659 [2024-10-15 08:30:01.136244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.659 [2024-10-15 08:30:01.140856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.659 [2024-10-15 08:30:01.141163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.659 [2024-10-15 08:30:01.141218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.659 [2024-10-15 08:30:01.145789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.659 [2024-10-15 08:30:01.146290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.659 [2024-10-15 08:30:01.146315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.659 [2024-10-15 08:30:01.150903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.659 [2024-10-15 08:30:01.151221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.659 [2024-10-15 08:30:01.151246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.659 [2024-10-15 08:30:01.155736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.659 [2024-10-15 08:30:01.156025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.659 [2024-10-15 08:30:01.156051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.160616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.160906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.160932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.165415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.165701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.165729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.170429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.170736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.170762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.175655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.175942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.175971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.180744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.181029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.181055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.185801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.186313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.186350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.191145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.191457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.191490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.196158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.196443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.196470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.200921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.201241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.201298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.205800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.206306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.206341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.210863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.211171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.211207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.215907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.216229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.216253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.221008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.221349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.221381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.226147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.226509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.226535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.231356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 
[2024-10-15 08:30:01.231683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.231709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.236563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.236851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.236877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.241745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.242244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.242277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.247000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.247345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.247376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.251793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.252079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.252104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.256715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.257004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.257030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.261555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.262017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.262049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.266593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.266897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.266923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.271895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.272245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.272273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.277022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.277346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.277378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.282090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.282564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.282596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.660 [2024-10-15 08:30:01.287393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.660 [2024-10-15 08:30:01.287715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.660 [2024-10-15 08:30:01.287741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.292432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.292750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.292776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.297405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.297698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.297726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.302279] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.302596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.302622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.307217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.307511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.307551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.312224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.312543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.312569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.317113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.317476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.317513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.322474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.322822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.322849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.327840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.328155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.328195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.333016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.333392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.333425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
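Every completion in this stretch is printed with the same status pair, (00/22): status code type 0x0 (generic command status) and status code 0x22, which the NVMe base specification defines as Transient Transport Error, with dnr:0 marking the command as retryable. A hedged sketch of how the sct/sc pair that spdk_nvme_print_completion shows is packed into the 16-bit completion status word (the decoder below is illustrative, not SPDK's API):

```python
def decode_nvme_status(status16: int) -> dict:
    """Split a completion's 16-bit status word (phase bit included) into fields.

    Bit layout, low to high: P(1), SC(8), SCT(3), CRD(2), M(1), DNR(1).
    """
    return {
        "p":   status16 & 0x1,
        "sc":  (status16 >> 1) & 0xFF,
        "sct": (status16 >> 9) & 0x7,
        "crd": (status16 >> 12) & 0x3,
        "m":   (status16 >> 14) & 0x1,
        "dnr": (status16 >> 15) & 0x1,
    }

# SCT 0x0 / SC 0x22 is what the log prints as "(00/22)"; in the generic command
# status set, 0x22 is Transient Transport Error.
status = (0x0 << 9) | (0x22 << 1)           # p=0, m=0, dnr=0 as in the entries above
fields = decode_nvme_status(status)
assert (fields["sct"], fields["sc"], fields["dnr"]) == (0x0, 0x22, 0)
print(f"({fields['sct']:02x}/{fields['sc']:02x})")   # -> (00/22)
```

Because DNR is clear, the initiator side is free to resubmit the I/O, which is consistent with the bperf summary further down reporting io_failed: 0 even though hundreds of these completions were returned.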
00:18:59.661 [2024-10-15 08:30:01.338301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.338599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.338626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.343464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.343811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.343858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.348751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.349050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.349082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.354000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.354525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.354571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.359471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.359791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.359817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.364690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.364977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.365003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.369884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.370397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.370430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.375419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.375759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.375784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.380624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.380923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.380952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.661 [2024-10-15 08:30:01.385658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.661 [2024-10-15 08:30:01.386109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.661 [2024-10-15 08:30:01.386159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:59.920 [2024-10-15 08:30:01.391092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.920 [2024-10-15 08:30:01.391409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.920 [2024-10-15 08:30:01.391437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:59.920 [2024-10-15 08:30:01.396123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.920 [2024-10-15 08:30:01.396473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.920 [2024-10-15 08:30:01.396500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:59.920 [2024-10-15 08:30:01.401113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.920 [2024-10-15 08:30:01.401427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.920 [2024-10-15 08:30:01.401453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:59.920 [2024-10-15 08:30:01.405908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90 00:18:59.920 [2024-10-15 08:30:01.406416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.920 [2024-10-15 08:30:01.406448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:59.920 [2024-10-15 08:30:01.411030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90
00:18:59.920 [2024-10-15 08:30:01.411345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:59.921 [2024-10-15 08:30:01.411372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:18:59.921 [2024-10-15 08:30:01.416014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c933d0) with pdu=0x2000166fef90
00:18:59.921 6029.50 IOPS, 753.69 MiB/s [2024-10-15T08:30:01.652Z] [2024-10-15 08:30:01.417601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:59.921 [2024-10-15 08:30:01.417636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:59.921
00:18:59.921 Latency(us)
00:18:59.921 [2024-10-15T08:30:01.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:59.921 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:18:59.921 nvme0n1 : 2.00 6024.85 753.11 0.00 0.00 2649.51 1750.11 6702.55
00:18:59.921 [2024-10-15T08:30:01.652Z] ===================================================================================================================
00:18:59.921 [2024-10-15T08:30:01.652Z] Total : 6024.85 753.11 0.00 0.00 2649.51 1750.11 6702.55
00:18:59.921 {
00:18:59.921 "results": [
00:18:59.921 {
00:18:59.921 "job": "nvme0n1",
00:18:59.921 "core_mask": "0x2",
00:18:59.921 "workload": "randwrite",
00:18:59.921 "status": "finished",
00:18:59.921 "queue_depth": 16,
00:18:59.921 "io_size": 131072,
00:18:59.921 "runtime": 2.003868,
00:18:59.921 "iops": 6024.847944076157,
00:18:59.921 "mibps": 753.1059930095196,
00:18:59.921 "io_failed": 0,
00:18:59.921 "io_timeout": 0,
00:18:59.921 "avg_latency_us": 2649.5111538142964,
00:18:59.921 "min_latency_us": 1750.1090909090908,
00:18:59.921 "max_latency_us": 6702.545454545455
00:18:59.921 }
00:18:59.921 ],
00:18:59.921 "core_count": 1
00:18:59.921 }
00:18:59.921 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:59.921 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:59.921 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:59.921 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:59.921 | .driver_specific
00:18:59.921 | .nvme_error
00:18:59.921 | .status_code
00:18:59.921 | .command_transient_transport_error'
00:19:00.179 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 389 > 0 ))
00:19:00.179 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80758
00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80758 ']'
00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954
-- # kill -0 80758 00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80758 00:19:00.180 killing process with pid 80758 00:19:00.180 Received shutdown signal, test time was about 2.000000 seconds 00:19:00.180 00:19:00.180 Latency(us) 00:19:00.180 [2024-10-15T08:30:01.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.180 [2024-10-15T08:30:01.911Z] =================================================================================================================== 00:19:00.180 [2024-10-15T08:30:01.911Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80758' 00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80758 00:19:00.180 08:30:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80758 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80559 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 80559 ']' 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 80559 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80559 00:19:00.438 killing process with pid 80559 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80559' 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 80559 00:19:00.438 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 80559 00:19:00.697 ************************************ 00:19:00.697 END TEST nvmf_digest_error 00:19:00.697 ************************************ 00:19:00.697 00:19:00.697 real 0m17.454s 00:19:00.697 user 0m33.219s 00:19:00.697 sys 0m4.985s 00:19:00.697 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:00.697 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:00.697 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- 
host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:00.697 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:00.697 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:00.697 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:00.955 rmmod nvme_tcp 00:19:00.955 rmmod nvme_fabrics 00:19:00.955 rmmod nvme_keyring 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 80559 ']' 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 80559 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 80559 ']' 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 80559 00:19:00.955 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80559) - No such process 00:19:00.955 Process with pid 80559 is not found 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 80559 is not found' 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:00.955 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- 
# ip link set nvmf_tgt_br2 down 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:00.956 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:19:01.214 00:19:01.214 real 0m37.000s 00:19:01.214 user 1m9.873s 00:19:01.214 sys 0m10.344s 00:19:01.214 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.214 ************************************ 00:19:01.214 END TEST nvmf_digest 00:19:01.214 ************************************ 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.215 ************************************ 00:19:01.215 START TEST nvmf_host_multipath 00:19:01.215 ************************************ 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:01.215 * Looking for test storage... 
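(Recap of the digest-error check exercised in the trace above.) host/digest.sh decides the corrupted-digest case passed by reading the transient-transport-error counter out of bdevperf's iostat and requiring it to be non-zero. A minimal sketch of that check, assuming the bperf RPC socket and bdev name used in this run (/var/tmp/bperf.sock, nvme0n1):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # completions that came back as COMMAND TRANSIENT TRANSPORT ERROR (00/22)
    errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))   # injected data-digest corruption must surface as transient transport errors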
00:19:01.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:19:01.215 08:30:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:01.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.475 --rc genhtml_branch_coverage=1 00:19:01.475 --rc genhtml_function_coverage=1 00:19:01.475 --rc genhtml_legend=1 00:19:01.475 --rc geninfo_all_blocks=1 00:19:01.475 --rc geninfo_unexecuted_blocks=1 00:19:01.475 00:19:01.475 ' 00:19:01.475 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:01.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.475 --rc genhtml_branch_coverage=1 00:19:01.475 --rc genhtml_function_coverage=1 00:19:01.475 --rc genhtml_legend=1 00:19:01.475 --rc geninfo_all_blocks=1 00:19:01.475 --rc geninfo_unexecuted_blocks=1 00:19:01.475 00:19:01.476 ' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:01.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.476 --rc genhtml_branch_coverage=1 00:19:01.476 --rc genhtml_function_coverage=1 00:19:01.476 --rc genhtml_legend=1 00:19:01.476 --rc geninfo_all_blocks=1 00:19:01.476 --rc geninfo_unexecuted_blocks=1 00:19:01.476 00:19:01.476 ' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:01.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.476 --rc genhtml_branch_coverage=1 00:19:01.476 --rc genhtml_function_coverage=1 00:19:01.476 --rc genhtml_legend=1 00:19:01.476 --rc geninfo_all_blocks=1 00:19:01.476 --rc geninfo_unexecuted_blocks=1 00:19:01.476 00:19:01.476 ' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.476 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@458 -- # nvmf_veth_init 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:01.476 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:01.477 Cannot find device "nvmf_init_br" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:01.477 Cannot find device "nvmf_init_br2" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:01.477 Cannot find device "nvmf_tgt_br" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:01.477 Cannot find device "nvmf_tgt_br2" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:01.477 Cannot find device "nvmf_init_br" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:01.477 Cannot find device "nvmf_init_br2" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:01.477 Cannot find device "nvmf_tgt_br" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:01.477 Cannot find device "nvmf_tgt_br2" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:01.477 Cannot find device "nvmf_br" 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:19:01.477 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:01.736 Cannot find device "nvmf_init_if" 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:01.736 Cannot find device "nvmf_init_if2" 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:01.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:01.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
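The "Cannot find device ..." / "Cannot open network namespace ..." messages above come from the pre-setup cleanup (each failing command is followed by "# true", so the failure is expected when nothing is left over); nvmf_veth_init then builds the test topology from scratch. A condensed sketch of that topology, using only the interface names and addresses seen in this trace (the first initiator/target pair is shown; nvmf_init_if2/nvmf_tgt_if2 and their bridges are wired the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # both *_br peers join the bridge
    ip link set nvmf_tgt_br master nvmf_br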
00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:01.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:01.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:19:01.736 00:19:01.736 --- 10.0.0.3 ping statistics --- 00:19:01.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.736 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:01.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:01.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:19:01.736 00:19:01.736 --- 10.0.0.4 ping statistics --- 00:19:01.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.736 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:01.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:01.736 00:19:01.736 --- 10.0.0.1 ping statistics --- 00:19:01.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.736 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:01.736 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:01.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:01.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:19:01.995 00:19:01.995 --- 10.0.0.2 ping statistics --- 00:19:01.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.995 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # return 0 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:01.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # nvmfpid=81075 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # waitforlisten 81075 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 81075 ']' 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.995 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:01.995 [2024-10-15 08:30:03.562024] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:19:01.995 [2024-10-15 08:30:03.562172] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.995 [2024-10-15 08:30:03.705355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:02.254 [2024-10-15 08:30:03.781155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.254 [2024-10-15 08:30:03.781696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.254 [2024-10-15 08:30:03.781956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.254 [2024-10-15 08:30:03.782283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.254 [2024-10-15 08:30:03.782634] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.254 [2024-10-15 08:30:03.784247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.254 [2024-10-15 08:30:03.784257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.254 [2024-10-15 08:30:03.858210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:02.254 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.254 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:19:02.254 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:02.254 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.254 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:02.513 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.513 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81075 00:19:02.513 08:30:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:02.772 [2024-10-15 08:30:04.287489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.772 08:30:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:03.030 Malloc0 00:19:03.030 08:30:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:03.596 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:03.854 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:03.854 [2024-10-15 08:30:05.571938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:04.112 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:04.371 [2024-10-15 08:30:05.872083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:04.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81123 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81123 /var/tmp/bdevperf.sock 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 81123 ']' 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.371 08:30:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:05.307 08:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.307 08:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:19:05.307 08:30:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:05.566 08:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:05.825 Nvme0n1 00:19:06.084 08:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:06.371 Nvme0n1 00:19:06.371 08:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:06.371 08:30:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:07.310 08:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:07.310 08:30:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:07.569 08:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:08.136 08:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:08.136 08:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81174 00:19:08.136 08:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:08.136 08:30:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81075 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.702 Attaching 4 probes... 00:19:14.702 @path[10.0.0.3, 4421]: 17222 00:19:14.702 @path[10.0.0.3, 4421]: 17595 00:19:14.702 @path[10.0.0.3, 4421]: 17845 00:19:14.702 @path[10.0.0.3, 4421]: 17647 00:19:14.702 @path[10.0.0.3, 4421]: 17647 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81174 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:14.702 08:30:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:14.702 08:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:14.961 08:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:14.961 08:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81287 00:19:14.961 08:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:14.961 08:30:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81075 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:21.528 Attaching 4 probes... 00:19:21.528 @path[10.0.0.3, 4420]: 17627 00:19:21.528 @path[10.0.0.3, 4420]: 18307 00:19:21.528 @path[10.0.0.3, 4420]: 18392 00:19:21.528 @path[10.0.0.3, 4420]: 18367 00:19:21.528 @path[10.0.0.3, 4420]: 18336 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81287 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:21.528 08:30:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:21.528 08:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:21.786 08:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:21.786 08:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81075 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:21.786 08:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81401 00:19:21.786 08:30:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:28.351 Attaching 4 probes... 00:19:28.351 @path[10.0.0.3, 4421]: 13360 00:19:28.351 @path[10.0.0.3, 4421]: 17664 00:19:28.351 @path[10.0.0.3, 4421]: 18056 00:19:28.351 @path[10.0.0.3, 4421]: 17840 00:19:28.351 @path[10.0.0.3, 4421]: 17704 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81401 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:28.351 08:30:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:28.351 08:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:28.610 08:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:28.610 08:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81518 00:19:28.610 08:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:28.610 08:30:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81075 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:35.205 Attaching 4 probes... 
00:19:35.205 00:19:35.205 00:19:35.205 00:19:35.205 00:19:35.205 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81518 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:35.205 08:30:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:35.462 08:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:35.462 08:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81075 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:35.462 08:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81635 00:19:35.462 08:30:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:42.070 Attaching 4 probes... 
00:19:42.070 @path[10.0.0.3, 4421]: 17092 00:19:42.070 @path[10.0.0.3, 4421]: 17378 00:19:42.070 @path[10.0.0.3, 4421]: 17303 00:19:42.070 @path[10.0.0.3, 4421]: 17272 00:19:42.070 @path[10.0.0.3, 4421]: 17296 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81635 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:42.070 08:30:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:43.447 08:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:43.447 08:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81754 00:19:43.447 08:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81075 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:43.447 08:30:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:50.009 08:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:50.009 08:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:50.009 Attaching 4 probes... 
00:19:50.009 @path[10.0.0.3, 4420]: 17431 00:19:50.009 @path[10.0.0.3, 4420]: 17297 00:19:50.009 @path[10.0.0.3, 4420]: 17235 00:19:50.009 @path[10.0.0.3, 4420]: 17313 00:19:50.009 @path[10.0.0.3, 4420]: 17164 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81754 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:50.009 [2024-10-15 08:30:51.360575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:50.009 08:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:56.572 08:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:56.572 08:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81075 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:56.572 08:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81934 00:19:56.572 08:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:03.147 Attaching 4 probes... 
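Folding the repeated confirm_io_on_port traces into single calls, the failover sequence exercised in this stretch of the run (multipath.sh steps @96 through @112) reduces to the outline below; every rpc.py invocation is taken verbatim from the trace, and confirm_io_on_port is the helper sketched earlier:

rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
confirm_io_on_port optimized 4421       # I/O is served on the optimized listener, port 4421
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
sleep 1
confirm_io_on_port non_optimized 4420   # with 4421 gone, I/O fails over to the non_optimized listener on 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
confirm_io_on_port optimized 4421       # once 4421 is back and optimized, I/O moves back to it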
00:20:03.147 @path[10.0.0.3, 4421]: 17100 00:20:03.147 @path[10.0.0.3, 4421]: 17212 00:20:03.147 @path[10.0.0.3, 4421]: 17100 00:20:03.147 @path[10.0.0.3, 4421]: 17088 00:20:03.147 @path[10.0.0.3, 4421]: 17716 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81934 00:20:03.147 08:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81123 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 81123 ']' 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 81123 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81123 00:20:03.147 killing process with pid 81123 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81123' 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 81123 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 81123 00:20:03.147 { 00:20:03.147 "results": [ 00:20:03.147 { 00:20:03.147 "job": "Nvme0n1", 00:20:03.147 "core_mask": "0x4", 00:20:03.147 "workload": "verify", 00:20:03.147 "status": "terminated", 00:20:03.147 "verify_range": { 00:20:03.147 "start": 0, 00:20:03.147 "length": 16384 00:20:03.147 }, 00:20:03.147 "queue_depth": 128, 00:20:03.147 "io_size": 4096, 00:20:03.147 "runtime": 55.963947, 00:20:03.147 "iops": 7478.17161645157, 00:20:03.147 "mibps": 29.211607876763946, 00:20:03.147 "io_failed": 0, 00:20:03.147 "io_timeout": 0, 00:20:03.147 "avg_latency_us": 17084.54623425033, 00:20:03.147 "min_latency_us": 178.73454545454547, 00:20:03.147 "max_latency_us": 7046430.72 00:20:03.147 } 00:20:03.147 ], 00:20:03.147 "core_count": 1 00:20:03.147 } 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81123 00:20:03.147 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:03.147 [2024-10-15 08:30:05.952935] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 
24.03.0 initialization... 00:20:03.147 [2024-10-15 08:30:05.953081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81123 ] 00:20:03.147 [2024-10-15 08:30:06.093419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.147 [2024-10-15 08:30:06.187563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.147 [2024-10-15 08:30:06.269469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:03.147 Running I/O for 90 seconds... 00:20:03.147 6820.00 IOPS, 26.64 MiB/s [2024-10-15T08:31:04.878Z] 7725.50 IOPS, 30.18 MiB/s [2024-10-15T08:31:04.878Z] 8101.00 IOPS, 31.64 MiB/s [2024-10-15T08:31:04.878Z] 8281.75 IOPS, 32.35 MiB/s [2024-10-15T08:31:04.878Z] 8415.00 IOPS, 32.87 MiB/s [2024-10-15T08:31:04.878Z] 8484.50 IOPS, 33.14 MiB/s [2024-10-15T08:31:04.878Z] 8531.29 IOPS, 33.33 MiB/s [2024-10-15T08:31:04.878Z] 8557.88 IOPS, 33.43 MiB/s [2024-10-15T08:31:04.878Z] [2024-10-15 08:30:16.503952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.147 [2024-10-15 08:30:16.504054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:53696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.147 [2024-10-15 08:30:16.504196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.147 [2024-10-15 08:30:16.504238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:53712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.147 [2024-10-15 08:30:16.504276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.147 [2024-10-15 08:30:16.504314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.147 [2024-10-15 08:30:16.504353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.147 [2024-10-15 08:30:16.504402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:53744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.147 [2024-10-15 08:30:16.504454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.147 [2024-10-15 08:30:16.504506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.147 [2024-10-15 08:30:16.504577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.147 [2024-10-15 08:30:16.504616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.147 [2024-10-15 08:30:16.504651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:03.147 [2024-10-15 08:30:16.504671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.504686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.504707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.504723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.504744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.504759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.504779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.504794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.504814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.148 [2024-10-15 08:30:16.504829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.504850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.504875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.504896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.504911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.504931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.504946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.504966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.504981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.148 [2024-10-15 08:30:16.505187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.148 [2024-10-15 08:30:16.505229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 
nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.148 [2024-10-15 08:30:16.505267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.148 [2024-10-15 08:30:16.505304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.148 [2024-10-15 08:30:16.505341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.148 [2024-10-15 08:30:16.505379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.148 [2024-10-15 08:30:16.505418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.148 [2024-10-15 08:30:16.505485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.505971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.505992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.506007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 
m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.506052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.506085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.506107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.148 [2024-10-15 08:30:16.506123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:03.148 [2024-10-15 08:30:16.506149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.506929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.506966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.506987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.149 [2024-10-15 08:30:16.507256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.149 [2024-10-15 08:30:16.507578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.507614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:03.149 [2024-10-15 08:30:16.507636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.149 [2024-10-15 08:30:16.507651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.507984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.507999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.508044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
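The long run of nvme_qpair.c print_command / print_completion notices on either side of this point is the initiator-side NVMe driver inside the bdevperf process dumping each command that completed with ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the listeners' ANA states were being switched; these are *NOTICE*-level prints rather than errors, and the multipath bdev is expected to retry the affected I/O on the remaining path, which is what this test exercises. To size the run without reading it line by line, a plain count over the dumped log works (try.txt is the file cat'ed above):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt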
00:20:03.150 [2024-10-15 08:30:16.508451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.508968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.508983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.510648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.150 [2024-10-15 08:30:16.510691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.510722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.510740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.510762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.510777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.150 [2024-10-15 08:30:16.510799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.150 [2024-10-15 08:30:16.510814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:16.510848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:16.510864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:16.510885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:16.510900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:16.510921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:16.510937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:16.510958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:16.510973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:16.511008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:16.511028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:03.151 8574.22 IOPS, 33.49 MiB/s [2024-10-15T08:31:04.882Z] 8612.80 IOPS, 33.64 MiB/s [2024-10-15T08:31:04.882Z] 8661.82 IOPS, 33.84 MiB/s [2024-10-15T08:31:04.882Z] 8708.00 IOPS, 34.02 MiB/s [2024-10-15T08:31:04.882Z] 8744.62 IOPS, 34.16 MiB/s [2024-10-15T08:31:04.882Z] 8776.00 IOPS, 34.28 MiB/s [2024-10-15T08:31:04.882Z] 8804.80 IOPS, 34.39 MiB/s [2024-10-15T08:31:04.882Z] [2024-10-15 08:30:23.129475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:23.129559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.129654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:23.129675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.129698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:23.129730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.129786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:23.129802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.129824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:23.129853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.129875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:03.151 [2024-10-15 08:30:23.129890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.129910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:23.129942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.129963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.151 [2024-10-15 08:30:23.129978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:45 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:03.151 [2024-10-15 08:30:23.130626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.151 [2024-10-15 08:30:23.130641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.130810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.130833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.130855] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.130870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.130891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.130906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.130945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.130971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.130994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.131011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.131048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.131084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.131121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 
dnr:0 00:20:03.152 [2024-10-15 08:30:23.131283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.152 [2024-10-15 08:30:23.131782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.131816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.131851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.131885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.131937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.131988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132081] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:03.152 [2024-10-15 08:30:23.132374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.152 [2024-10-15 08:30:23.132389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.132424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.132473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.153 [2024-10-15 08:30:23.132519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.132557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.132591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.132624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.132677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.132728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.132765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.132802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.132858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.132896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.132950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.132972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.132987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.153 [2024-10-15 08:30:23.133508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.133543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.133589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.133625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.133662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.133697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.133739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:20:03.153 [2024-10-15 08:30:23.133761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.133776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.153 [2024-10-15 08:30:23.133812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:03.153 [2024-10-15 08:30:23.133833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.133848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.133869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.133884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.133905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.133938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.133960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.133975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.133997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.134407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.134422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:23.135207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:23.135745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:23.135773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:03.154 [2024-10-15 08:30:23.135788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:03.154 8282.00 IOPS, 32.35 MiB/s [2024-10-15T08:31:04.885Z] 8289.41 IOPS, 32.38 MiB/s [2024-10-15T08:31:04.885Z] 8318.00 IOPS, 32.49 MiB/s [2024-10-15T08:31:04.885Z] 8360.00 IOPS, 32.66 MiB/s [2024-10-15T08:31:04.885Z] 8384.60 IOPS, 32.75 MiB/s [2024-10-15T08:31:04.885Z] 8405.90 IOPS, 32.84 MiB/s [2024-10-15T08:31:04.885Z] 8419.45 IOPS, 32.89 MiB/s [2024-10-15T08:31:04.885Z] [2024-10-15 08:30:30.254143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:30.254258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:30.254348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:30.254396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:30.254433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:30.254484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:30.254533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:30.254567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.154 [2024-10-15 08:30:30.254601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 
nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:30.254634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:03.154 [2024-10-15 08:30:30.254655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.154 [2024-10-15 08:30:30.254669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.254703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.254737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.254772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.254817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.254852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.254886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.254935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.254974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.254995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 
dnr:0 00:20:03.155 [2024-10-15 08:30:30.255407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.155 [2024-10-15 08:30:30.255845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.155 [2024-10-15 08:30:30.255913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.155 [2024-10-15 08:30:30.255967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.255988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.155 [2024-10-15 08:30:30.256002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.256023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.155 [2024-10-15 08:30:30.256038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.256058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.155 [2024-10-15 08:30:30.256090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.256111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.155 [2024-10-15 08:30:30.256127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:03.155 [2024-10-15 08:30:30.256149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.256265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.256313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.256352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.256389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.256426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.256478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.256514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.256549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:03.156 [2024-10-15 08:30:30.256620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.256975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.256990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.257011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.257026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.257047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.257079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.257100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.257121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.257142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.156 [2024-10-15 08:30:30.257171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.257195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:52256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.257211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.257233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:52264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.257248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.257277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:52272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.257294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:03.156 [2024-10-15 08:30:30.257315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.156 [2024-10-15 08:30:30.257331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.257372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.257410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:52304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.257461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.257498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:20:03.157 [2024-10-15 08:30:30.257799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.257978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.257992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.258029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.258081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.258118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:52320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:52384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:52392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.157 [2024-10-15 08:30:30.258760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.258796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.157 [2024-10-15 08:30:30.258832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:03.157 [2024-10-15 08:30:30.258853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:30.258868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.258888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:30.258903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.258945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:03.158 [2024-10-15 08:30:30.258961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.258983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:30.258998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:30.259034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:30.259071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:52448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:30.259107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:30.259143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:30.259195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:30.259240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:30.259277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:30.259314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:30.259351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:30.259793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:30.259820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:03.158 8118.09 IOPS, 31.71 MiB/s [2024-10-15T08:31:04.889Z] 7779.83 IOPS, 30.39 MiB/s [2024-10-15T08:31:04.889Z] 7468.64 IOPS, 29.17 MiB/s [2024-10-15T08:31:04.889Z] 7181.38 IOPS, 28.05 MiB/s [2024-10-15T08:31:04.889Z] 6915.41 IOPS, 27.01 MiB/s [2024-10-15T08:31:04.889Z] 6668.43 IOPS, 26.05 MiB/s [2024-10-15T08:31:04.889Z] 6438.48 IOPS, 25.15 MiB/s [2024-10-15T08:31:04.889Z] 6461.63 IOPS, 25.24 MiB/s [2024-10-15T08:31:04.889Z] 6534.10 IOPS, 25.52 MiB/s [2024-10-15T08:31:04.889Z] 6598.91 IOPS, 25.78 MiB/s [2024-10-15T08:31:04.889Z] 6661.73 IOPS, 26.02 MiB/s [2024-10-15T08:31:04.889Z] 6720.62 IOPS, 26.25 MiB/s [2024-10-15T08:31:04.889Z] 6774.77 IOPS, 26.46 MiB/s [2024-10-15T08:31:04.889Z] [2024-10-15 08:30:43.741310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:43.741407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:43.741499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:43.741539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:43.741576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:43.741613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:43.741650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:79 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:43.741723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.158 [2024-10-15 08:30:43.741760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.741796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.741833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.741869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.741905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.741942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.741978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.741999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.742014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.742035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.742050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 
08:30:43.742071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.742086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.742111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.742144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.742188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.742207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.742229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.742245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.742267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.742282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.742303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.158 [2024-10-15 08:30:43.742319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:03.158 [2024-10-15 08:30:43.742341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.742477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.742509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.742540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.742571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.742601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.742630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.742669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.742700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 
[2024-10-15 08:30:43.742836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.742971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.742987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.743001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.743031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.743067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.743097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.743141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743158] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.743173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.159 [2024-10-15 08:30:43.743202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.159 [2024-10-15 08:30:43.743600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.159 [2024-10-15 08:30:43.743616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.743630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.743660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.743690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.743720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.743750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743766] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.743780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.743810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.743847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.743878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.743908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.743937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.743967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.743982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.743997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 
nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107928 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:03.160 [2024-10-15 08:30:43.744438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.744468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.744498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.744528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.744557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.744587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.744624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.744655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.160 [2024-10-15 08:30:43.744684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:03.160 [2024-10-15 08:30:43.744714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.160 [2024-10-15 08:30:43.744730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.744760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.744790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.744820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.744850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.744880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.744909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.744939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.744968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.744982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.161 [2024-10-15 
08:30:43.745004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.745018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.745048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.745078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.745107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.161 [2024-10-15 08:30:43.745150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9320 is same with the state(6) to be set 00:20:03.161 [2024-10-15 08:30:43.745186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107488 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745238] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107944 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107952 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107960 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107968 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107976 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107984 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107992 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108000 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.161 [2024-10-15 08:30:43.745651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.161 [2024-10-15 08:30:43.745661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107496 len:8 PRP1 0x0 PRP2 0x0 00:20:03.161 [2024-10-15 08:30:43.745675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.161 [2024-10-15 08:30:43.745689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.162 [2024-10-15 08:30:43.745712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.162 [2024-10-15 08:30:43.745724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107504 len:8 PRP1 0x0 PRP2 0x0 00:20:03.162 [2024-10-15 08:30:43.745737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.745751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.162 [2024-10-15 08:30:43.745762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.162 [2024-10-15 08:30:43.745772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107512 len:8 PRP1 0x0 PRP2 0x0 00:20:03.162 [2024-10-15 08:30:43.745786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.745805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.162 [2024-10-15 08:30:43.745816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.162 [2024-10-15 08:30:43.745827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107520 len:8 PRP1 0x0 PRP2 0x0 00:20:03.162 [2024-10-15 08:30:43.745841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.745855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.162 [2024-10-15 08:30:43.745865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.162 [2024-10-15 08:30:43.745875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107528 len:8 PRP1 0x0 PRP2 0x0 00:20:03.162 [2024-10-15 08:30:43.745889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.745903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.162 [2024-10-15 08:30:43.745913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.162 [2024-10-15 08:30:43.745924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107536 len:8 PRP1 0x0 PRP2 0x0 00:20:03.162 [2024-10-15 08:30:43.745937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.745959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.162 [2024-10-15 08:30:43.745970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.162 [2024-10-15 08:30:43.745981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107544 len:8 PRP1 0x0 PRP2 0x0 00:20:03.162 [2024-10-15 08:30:43.745994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.746008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:03.162 [2024-10-15 08:30:43.746019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:03.162 [2024-10-15 08:30:43.746029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107552 len:8 PRP1 0x0 PRP2 0x0 00:20:03.162 [2024-10-15 08:30:43.746042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.746108] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xce9320 was disconnected and freed. reset controller. 00:20:03.162 [2024-10-15 08:30:43.746240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.162 [2024-10-15 08:30:43.746277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.746294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.162 [2024-10-15 08:30:43.746308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.746329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.162 [2024-10-15 08:30:43.746344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.746358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.162 [2024-10-15 08:30:43.746372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.746398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:03.162 [2024-10-15 08:30:43.746413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.162 [2024-10-15 08:30:43.746434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5af50 is same with the state(6) to be set 00:20:03.162 [2024-10-15 08:30:43.747614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.162 [2024-10-15 08:30:43.747655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xc5af50 (9): Bad file descriptor 00:20:03.162 [2024-10-15 08:30:43.748070] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:03.162 [2024-10-15 08:30:43.748103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc5af50 with addr=10.0.0.3, port=4421 00:20:03.162 [2024-10-15 08:30:43.748136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5af50 is same with the state(6) to be set 00:20:03.162 [2024-10-15 08:30:43.748173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5af50 (9): Bad file descriptor 00:20:03.162 [2024-10-15 08:30:43.748205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:03.162 [2024-10-15 08:30:43.748223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:03.162 [2024-10-15 08:30:43.748239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:03.162 [2024-10-15 08:30:43.748273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:03.162 [2024-10-15 08:30:43.748291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.162 6823.50 IOPS, 26.65 MiB/s [2024-10-15T08:31:04.893Z] 6874.32 IOPS, 26.85 MiB/s [2024-10-15T08:31:04.893Z] 6922.68 IOPS, 27.04 MiB/s [2024-10-15T08:31:04.893Z] 6969.18 IOPS, 27.22 MiB/s [2024-10-15T08:31:04.893Z] 7010.55 IOPS, 27.38 MiB/s [2024-10-15T08:31:04.893Z] 7049.90 IOPS, 27.54 MiB/s [2024-10-15T08:31:04.893Z] 7088.14 IOPS, 27.69 MiB/s [2024-10-15T08:31:04.893Z] 7121.81 IOPS, 27.82 MiB/s [2024-10-15T08:31:04.893Z] 7156.32 IOPS, 27.95 MiB/s [2024-10-15T08:31:04.893Z] 7189.29 IOPS, 28.08 MiB/s [2024-10-15T08:31:04.893Z] [2024-10-15 08:30:53.809370] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
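Note: the abort storm and controller reset above line up with the nvmf_delete_subsystem call from multipath.sh line 120 that is interleaved in the dump; after the initial connect() failure (errno 111) the host reconnects via the listener on port 4421 and the reset completes. For reference, the failover trigger exactly as recorded in this trace (no other commands are implied):
  # delete the subsystem serving the active path; bdev_nvme then retries and reconnects on the remaining listener
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1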
00:20:03.162 7220.43 IOPS, 28.20 MiB/s [2024-10-15T08:31:04.893Z] 7252.00 IOPS, 28.33 MiB/s [2024-10-15T08:31:04.893Z] 7282.25 IOPS, 28.45 MiB/s [2024-10-15T08:31:04.893Z] 7312.43 IOPS, 28.56 MiB/s [2024-10-15T08:31:04.893Z] 7340.26 IOPS, 28.67 MiB/s [2024-10-15T08:31:04.893Z] 7365.10 IOPS, 28.77 MiB/s [2024-10-15T08:31:04.893Z] 7389.15 IOPS, 28.86 MiB/s [2024-10-15T08:31:04.893Z] 7410.94 IOPS, 28.95 MiB/s [2024-10-15T08:31:04.893Z] 7431.80 IOPS, 29.03 MiB/s [2024-10-15T08:31:04.893Z] 7457.82 IOPS, 29.13 MiB/s [2024-10-15T08:31:04.893Z] Received shutdown signal, test time was about 55.964795 seconds 00:20:03.162 00:20:03.162 Latency(us) 00:20:03.162 [2024-10-15T08:31:04.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.162 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:03.162 Verification LBA range: start 0x0 length 0x4000 00:20:03.162 Nvme0n1 : 55.96 7478.17 29.21 0.00 0.00 17084.55 178.73 7046430.72 00:20:03.162 [2024-10-15T08:31:04.893Z] =================================================================================================================== 00:20:03.162 [2024-10-15T08:31:04.893Z] Total : 7478.17 29.21 0.00 0.00 17084.55 178.73 7046430.72 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.162 rmmod nvme_tcp 00:20:03.162 rmmod nvme_fabrics 00:20:03.162 rmmod nvme_keyring 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@515 -- # '[' -n 81075 ']' 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # killprocess 81075 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 81075 ']' 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 81075 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81075 00:20:03.162 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:03.162 
08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:03.162 killing process with pid 81075 00:20:03.163 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81075' 00:20:03.163 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 81075 00:20:03.163 08:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 81075 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-save 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:03.421 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:20:03.680 00:20:03.680 real 1m2.449s 00:20:03.680 user 2m53.414s 00:20:03.680 sys 0m18.692s 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:03.680 ************************************ 00:20:03.680 END TEST nvmf_host_multipath 00:20:03.680 ************************************ 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:03.680 08:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:03.681 08:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.681 ************************************ 00:20:03.681 START TEST nvmf_timeout 00:20:03.681 ************************************ 00:20:03.681 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:03.940 * Looking for test storage... 00:20:03.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:20:03.940 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:03.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.941 --rc genhtml_branch_coverage=1 00:20:03.941 --rc genhtml_function_coverage=1 00:20:03.941 --rc genhtml_legend=1 00:20:03.941 --rc geninfo_all_blocks=1 00:20:03.941 --rc geninfo_unexecuted_blocks=1 00:20:03.941 00:20:03.941 ' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:03.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.941 --rc genhtml_branch_coverage=1 00:20:03.941 --rc genhtml_function_coverage=1 00:20:03.941 --rc genhtml_legend=1 00:20:03.941 --rc geninfo_all_blocks=1 00:20:03.941 --rc geninfo_unexecuted_blocks=1 00:20:03.941 00:20:03.941 ' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:03.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.941 --rc genhtml_branch_coverage=1 00:20:03.941 --rc genhtml_function_coverage=1 00:20:03.941 --rc genhtml_legend=1 00:20:03.941 --rc geninfo_all_blocks=1 00:20:03.941 --rc geninfo_unexecuted_blocks=1 00:20:03.941 00:20:03.941 ' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:03.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.941 --rc genhtml_branch_coverage=1 00:20:03.941 --rc genhtml_function_coverage=1 00:20:03.941 --rc genhtml_legend=1 00:20:03.941 --rc geninfo_all_blocks=1 00:20:03.941 --rc geninfo_unexecuted_blocks=1 00:20:03.941 00:20:03.941 ' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.941 
08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.941 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:03.941 08:31:05 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:03.941 Cannot find device "nvmf_init_br" 00:20:03.941 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:03.942 Cannot find device "nvmf_init_br2" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:20:03.942 Cannot find device "nvmf_tgt_br" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.942 Cannot find device "nvmf_tgt_br2" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:03.942 Cannot find device "nvmf_init_br" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:03.942 Cannot find device "nvmf_init_br2" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:03.942 Cannot find device "nvmf_tgt_br" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:03.942 Cannot find device "nvmf_tgt_br2" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:03.942 Cannot find device "nvmf_br" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:03.942 Cannot find device "nvmf_init_if" 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:20:03.942 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:04.200 Cannot find device "nvmf_init_if2" 00:20:04.200 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:20:04.200 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.200 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:20:04.200 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.200 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.200 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:20:04.200 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.200 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.200 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:04.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:04.201 00:20:04.201 --- 10.0.0.3 ping statistics --- 00:20:04.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.201 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:04.201 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:04.201 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:20:04.201 00:20:04.201 --- 10.0.0.4 ping statistics --- 00:20:04.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.201 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:04.201 00:20:04.201 --- 10.0.0.1 ping statistics --- 00:20:04.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.201 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:04.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:20:04.201 00:20:04.201 --- 10.0.0.2 ping statistics --- 00:20:04.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.201 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # return 0 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # nvmfpid=82300 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # waitforlisten 82300 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:04.201 08:31:05 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82300 ']' 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.201 08:31:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:04.460 [2024-10-15 08:31:05.971928] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:20:04.460 [2024-10-15 08:31:05.972048] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.460 [2024-10-15 08:31:06.113037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:04.719 [2024-10-15 08:31:06.198957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.719 [2024-10-15 08:31:06.199270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.719 [2024-10-15 08:31:06.199452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.719 [2024-10-15 08:31:06.199612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.719 [2024-10-15 08:31:06.199658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:04.719 [2024-10-15 08:31:06.201307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.719 [2024-10-15 08:31:06.201323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.719 [2024-10-15 08:31:06.278944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.286 08:31:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.286 08:31:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:05.286 08:31:06 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:05.286 08:31:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:05.286 08:31:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:05.286 08:31:07 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.286 08:31:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.286 08:31:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:05.545 [2024-10-15 08:31:07.242887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.545 08:31:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:05.804 Malloc0 00:20:06.062 08:31:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.321 08:31:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:06.580 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:06.839 [2024-10-15 08:31:08.342535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82349 00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82349 /var/tmp/bdevperf.sock 00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82349 ']' 00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.839 08:31:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:06.839 [2024-10-15 08:31:08.415317] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:20:06.839 [2024-10-15 08:31:08.415428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82349 ] 00:20:06.839 [2024-10-15 08:31:08.553908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.098 [2024-10-15 08:31:08.638141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.098 [2024-10-15 08:31:08.713482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:08.034 08:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.034 08:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:08.034 08:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:08.292 08:31:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:08.550 NVMe0n1 00:20:08.550 08:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.550 08:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82378 00:20:08.550 08:31:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:08.550 Running I/O for 10 seconds... 
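For reference, the setup that the trace above drives can be condensed into the following shell sketch. Every command is copied from the xtrace lines above (the bdevperf initiator itself is launched separately with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f); only the RPC= shorthand is added here for readability, so treat this as a sketch of what host/timeout.sh does rather than the script itself.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: TCP transport, malloc-backed namespace, one subsystem, listener on 10.0.0.3:4420.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Host side: configure the bdevperf initiator over its own RPC socket, then attach the
  # controller with the short reconnect window the timeout test relies on.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # I/O is then started through the bdevperf helper:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests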
00:20:09.520 08:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:09.780 6932.00 IOPS, 27.08 MiB/s [2024-10-15T08:31:11.512Z] [2024-10-15 08:31:11.378725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.781 [2024-10-15 08:31:11.378802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.378832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.378844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.378856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.378867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.378879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.378889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.378900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.378909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.378920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.378930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.378941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.378950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.378961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.378970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.378982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.378990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63840 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:09.781 [2024-10-15 08:31:11.379236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379456] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.781 [2024-10-15 08:31:11.379601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.781 [2024-10-15 08:31:11.379612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379662] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.379985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.379996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 
[2024-10-15 08:31:11.380091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.782 [2024-10-15 08:31:11.380379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.782 [2024-10-15 08:31:11.380388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64488 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.380959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 
08:31:11.380980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.380991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.783 [2024-10-15 08:31:11.381207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.783 [2024-10-15 08:31:11.381227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.783 [2024-10-15 08:31:11.381239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-10-15 08:31:11.381530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:09.784 [2024-10-15 08:31:11.381550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ab5d0 is same with the state(6) to be set 00:20:09.784 [2024-10-15 08:31:11.381573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:09.784 [2024-10-15 08:31:11.381581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:09.784 [2024-10-15 08:31:11.381589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:20:09.784 [2024-10-15 08:31:11.381606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381672] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ab5d0 was disconnected and freed. reset controller. 
00:20:09.784 [2024-10-15 08:31:11.381779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.784 [2024-10-15 08:31:11.381796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.784 [2024-10-15 08:31:11.381817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.784 [2024-10-15 08:31:11.381836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.784 [2024-10-15 08:31:11.381855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.784 [2024-10-15 08:31:11.381864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d2e0 is same with the state(6) to be set 00:20:09.784 [2024-10-15 08:31:11.382081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.784 [2024-10-15 08:31:11.382103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d2e0 (9): Bad file descriptor 00:20:09.784 [2024-10-15 08:31:11.382236] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.784 [2024-10-15 08:31:11.382260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d2e0 with addr=10.0.0.3, port=4420 00:20:09.784 [2024-10-15 08:31:11.382272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d2e0 is same with the state(6) to be set 00:20:09.784 [2024-10-15 08:31:11.382293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d2e0 (9): Bad file descriptor 00:20:09.784 [2024-10-15 08:31:11.382310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.784 [2024-10-15 08:31:11.382319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.784 [2024-10-15 08:31:11.382331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.784 [2024-10-15 08:31:11.382363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.784 [2024-10-15 08:31:11.382375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.784 08:31:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:20:11.657 3978.00 IOPS, 15.54 MiB/s [2024-10-15T08:31:13.388Z] 2652.00 IOPS, 10.36 MiB/s [2024-10-15T08:31:13.388Z] [2024-10-15 08:31:13.382780] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.657 [2024-10-15 08:31:13.382909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d2e0 with addr=10.0.0.3, port=4420 00:20:11.657 [2024-10-15 08:31:13.382935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d2e0 is same with the state(6) to be set 00:20:11.657 [2024-10-15 08:31:13.382969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d2e0 (9): Bad file descriptor 00:20:11.657 [2024-10-15 08:31:13.382990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:11.657 [2024-10-15 08:31:13.383002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:11.657 [2024-10-15 08:31:13.383014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:11.657 [2024-10-15 08:31:13.383046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:11.657 [2024-10-15 08:31:13.383060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:11.916 08:31:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:20:11.916 08:31:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:11.916 08:31:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:12.174 08:31:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:20:12.174 08:31:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:20:12.174 08:31:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:12.174 08:31:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:12.432 08:31:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:12.432 08:31:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:13.626 1989.00 IOPS, 7.77 MiB/s [2024-10-15T08:31:15.615Z] 1591.20 IOPS, 6.22 MiB/s [2024-10-15T08:31:15.615Z] [2024-10-15 08:31:15.383298] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:13.884 [2024-10-15 08:31:15.383391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d2e0 with addr=10.0.0.3, port=4420 00:20:13.884 [2024-10-15 08:31:15.383413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d2e0 is same with the state(6) to be set 00:20:13.884 [2024-10-15 08:31:15.383447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d2e0 (9): Bad file descriptor 00:20:13.884 [2024-10-15 08:31:15.383469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in 
error state 00:20:13.884 [2024-10-15 08:31:15.383480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:13.884 [2024-10-15 08:31:15.383492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:13.884 [2024-10-15 08:31:15.383532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:13.884 [2024-10-15 08:31:15.383544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:15.845 1326.00 IOPS, 5.18 MiB/s [2024-10-15T08:31:17.576Z] 1136.57 IOPS, 4.44 MiB/s [2024-10-15T08:31:17.576Z] [2024-10-15 08:31:17.383753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:15.845 [2024-10-15 08:31:17.383841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:15.845 [2024-10-15 08:31:17.383864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:15.845 [2024-10-15 08:31:17.383893] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:15.845 [2024-10-15 08:31:17.383943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:16.778 994.50 IOPS, 3.88 MiB/s 00:20:16.778 Latency(us) 00:20:16.778 [2024-10-15T08:31:18.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.778 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:16.778 Verification LBA range: start 0x0 length 0x4000 00:20:16.778 NVMe0n1 : 8.16 975.36 3.81 15.69 0.00 128973.61 3768.32 7015926.69 00:20:16.778 [2024-10-15T08:31:18.509Z] =================================================================================================================== 00:20:16.778 [2024-10-15T08:31:18.509Z] Total : 975.36 3.81 15.69 0.00 128973.61 3768.32 7015926.69 00:20:16.778 { 00:20:16.778 "results": [ 00:20:16.778 { 00:20:16.778 "job": "NVMe0n1", 00:20:16.779 "core_mask": "0x4", 00:20:16.779 "workload": "verify", 00:20:16.779 "status": "finished", 00:20:16.779 "verify_range": { 00:20:16.779 "start": 0, 00:20:16.779 "length": 16384 00:20:16.779 }, 00:20:16.779 "queue_depth": 128, 00:20:16.779 "io_size": 4096, 00:20:16.779 "runtime": 8.156963, 00:20:16.779 "iops": 975.363011944519, 00:20:16.779 "mibps": 3.8100117654082775, 00:20:16.779 "io_failed": 128, 00:20:16.779 "io_timeout": 0, 00:20:16.779 "avg_latency_us": 128973.6120696325, 00:20:16.779 "min_latency_us": 3768.32, 00:20:16.779 "max_latency_us": 7015926.69090909 00:20:16.779 } 00:20:16.779 ], 00:20:16.779 "core_count": 1 00:20:16.779 } 00:20:17.344 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:17.344 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:17.344 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:17.911 
08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82378 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82349 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82349 ']' 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82349 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82349 00:20:17.911 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:18.170 killing process with pid 82349 00:20:18.170 Received shutdown signal, test time was about 9.415153 seconds 00:20:18.170 00:20:18.170 Latency(us) 00:20:18.170 [2024-10-15T08:31:19.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.170 [2024-10-15T08:31:19.901Z] =================================================================================================================== 00:20:18.170 [2024-10-15T08:31:19.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.170 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:18.170 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82349' 00:20:18.170 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82349 00:20:18.170 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82349 00:20:18.429 08:31:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:18.429 [2024-10-15 08:31:20.148232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:18.687 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82497 00:20:18.687 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82497 /var/tmp/bdevperf.sock 00:20:18.687 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:18.688 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82497 ']' 00:20:18.688 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.688 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:18.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.688 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:18.688 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:18.688 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:18.688 [2024-10-15 08:31:20.218364] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:20:18.688 [2024-10-15 08:31:20.218465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82497 ] 00:20:18.688 [2024-10-15 08:31:20.355191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.946 [2024-10-15 08:31:20.434433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.946 [2024-10-15 08:31:20.510839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:18.946 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:18.946 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:18.946 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:19.205 08:31:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:19.772 NVMe0n1 00:20:19.772 08:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82518 00:20:19.772 08:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:19.772 08:31:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:19.772 Running I/O for 10 seconds... 
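For reference, the setup the trace above (timeout.sh@78, @79 and @83) just performed against the bdevperf RPC socket, condensed into one sequence. Every path, address and timeout value is copied from this log; only the grouping is added here:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
# Apply the same bdev_nvme options the trace used, then attach the TCP
# controller with the short loss/fast-fail/reconnect timeouts this test relies on.
"$rpc" -s "$sock" bdev_nvme_set_options -r -1
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
# Kick off the configured workload (q=128, 4096-byte verify, 10 s) in bdevperf.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests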
00:20:20.707 08:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:20.968 6549.00 IOPS, 25.58 MiB/s [2024-10-15T08:31:22.699Z] [2024-10-15 08:31:22.472035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.968 [2024-10-15 08:31:22.472128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.968 [2024-10-15 08:31:22.472168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.968 [2024-10-15 08:31:22.472191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.968 [2024-10-15 08:31:22.472212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.968 [2024-10-15 08:31:22.472233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.968 [2024-10-15 08:31:22.472254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.968 [2024-10-15 08:31:22.472276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.968 [2024-10-15 08:31:22.472296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.968 [2024-10-15 08:31:22.472317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 
lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.968 [2024-10-15 08:31:22.472338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.968 [2024-10-15 08:31:22.472369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.968 [2024-10-15 08:31:22.472389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.968 [2024-10-15 08:31:22.472409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.968 [2024-10-15 08:31:22.472430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.968 [2024-10-15 08:31:22.472450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.968 [2024-10-15 08:31:22.472461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:20.969 [2024-10-15 08:31:22.472557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472760] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472976] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.472987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.472996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.969 [2024-10-15 08:31:22.473344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.969 [2024-10-15 08:31:22.473356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 
08:31:22.473431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.473988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.473998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.474018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.474047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60648 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.474067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.474087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.474110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.970 [2024-10-15 08:31:22.474142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc5d0 is same with the state(6) to be set 00:20:20.970 [2024-10-15 08:31:22.474165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.970 [2024-10-15 08:31:22.474173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.970 [2024-10-15 08:31:22.474194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60680 len:8 PRP1 0x0 PRP2 0x0 00:20:20.970 [2024-10-15 08:31:22.474204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.970 [2024-10-15 08:31:22.474233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.970 [2024-10-15 08:31:22.474242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59792 len:8 PRP1 0x0 PRP2 0x0 00:20:20.970 [2024-10-15 08:31:22.474251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.970 [2024-10-15 08:31:22.474261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.970 [2024-10-15 08:31:22.474269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.970 [2024-10-15 08:31:22.474276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59800 len:8 PRP1 0x0 PRP2 0x0 00:20:20.970 [2024-10-15 08:31:22.474286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59808 len:8 
PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59816 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59824 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59832 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59840 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60688 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60696 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474536] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59848 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59856 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59864 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59872 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59880 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59888 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59896 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59904 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59912 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59920 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59928 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59936 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.474960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.474967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.474982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59944 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.474991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.475000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.475008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.475016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59952 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.475027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.475036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.475043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.475052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59960 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.475061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.475070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.475083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.475091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59968 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.475100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.475129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.475138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.971 [2024-10-15 08:31:22.475146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59976 len:8 PRP1 0x0 PRP2 0x0 00:20:20.971 [2024-10-15 08:31:22.475155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.971 [2024-10-15 08:31:22.475165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.971 [2024-10-15 08:31:22.475172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.972 [2024-10-15 08:31:22.475179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59984 len:8 PRP1 0x0 PRP2 0x0 00:20:20.972 [2024-10-15 08:31:22.475189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.475199] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.972 [2024-10-15 08:31:22.489256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.972 [2024-10-15 08:31:22.489305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60704 len:8 PRP1 0x0 PRP2 0x0 00:20:20.972 [2024-10-15 08:31:22.489323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.972 [2024-10-15 08:31:22.489369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.972 [2024-10-15 08:31:22.489381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60712 len:8 PRP1 0x0 PRP2 0x0 00:20:20.972 [2024-10-15 08:31:22.489394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.972 [2024-10-15 08:31:22.489424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.972 [2024-10-15 08:31:22.489436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60720 len:8 PRP1 0x0 PRP2 0x0 00:20:20.972 [2024-10-15 08:31:22.489449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.972 [2024-10-15 08:31:22.489472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.972 [2024-10-15 08:31:22.489483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60728 len:8 PRP1 0x0 PRP2 0x0 00:20:20.972 [2024-10-15 08:31:22.489496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.972 [2024-10-15 08:31:22.489519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.972 [2024-10-15 08:31:22.489530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60736 len:8 PRP1 0x0 PRP2 0x0 00:20:20.972 [2024-10-15 08:31:22.489543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:20.972 [2024-10-15 08:31:22.489567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:20.972 [2024-10-15 08:31:22.489578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60744 len:8 PRP1 0x0 PRP2 0x0 00:20:20.972 [2024-10-15 08:31:22.489604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489676] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0x1cdc5d0 was disconnected and freed. reset controller. 00:20:20.972 [2024-10-15 08:31:22.489815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.972 [2024-10-15 08:31:22.489833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.972 [2024-10-15 08:31:22.489854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.972 [2024-10-15 08:31:22.489874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:20.972 [2024-10-15 08:31:22.489893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:20.972 [2024-10-15 08:31:22.489903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e2e0 is same with the state(6) to be set 00:20:20.972 [2024-10-15 08:31:22.490149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.972 [2024-10-15 08:31:22.490209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e2e0 (9): Bad file descriptor 00:20:20.972 [2024-10-15 08:31:22.490324] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.972 [2024-10-15 08:31:22.490347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6e2e0 with addr=10.0.0.3, port=4420 00:20:20.972 [2024-10-15 08:31:22.490358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e2e0 is same with the state(6) to be set 00:20:20.972 [2024-10-15 08:31:22.490377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e2e0 (9): Bad file descriptor 00:20:20.972 [2024-10-15 08:31:22.490393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.972 [2024-10-15 08:31:22.490403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.972 [2024-10-15 08:31:22.490415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.972 [2024-10-15 08:31:22.490436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.972 [2024-10-15 08:31:22.490447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.972 08:31:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:20:21.916 3733.00 IOPS, 14.58 MiB/s [2024-10-15T08:31:23.647Z] [2024-10-15 08:31:23.490619] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.916 [2024-10-15 08:31:23.490706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6e2e0 with addr=10.0.0.3, port=4420 00:20:21.916 [2024-10-15 08:31:23.490725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e2e0 is same with the state(6) to be set 00:20:21.916 [2024-10-15 08:31:23.490755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e2e0 (9): Bad file descriptor 00:20:21.916 [2024-10-15 08:31:23.490777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.916 [2024-10-15 08:31:23.490788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.916 [2024-10-15 08:31:23.490801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.916 [2024-10-15 08:31:23.490832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.916 [2024-10-15 08:31:23.490846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.916 08:31:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:22.175 [2024-10-15 08:31:23.835018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:22.175 08:31:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82518 00:20:23.017 2488.67 IOPS, 9.72 MiB/s [2024-10-15T08:31:24.748Z] [2024-10-15 08:31:24.509848] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
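The reconnect failures above come from the timeout test tearing the NVMe/TCP listener down underneath an active bdevperf job and then restoring it with the rpc.py call recorded in this log (nvmf_subsystem_add_listener on nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420); the matching nvmf_subsystem_remove_listener call appears further below. A minimal sketch of driving that same listener toggle from outside the test scripts, assuming the SPDK checkout path shown in this log and a target that already exposes the subsystem (the wrapper itself is illustrative, not part of the test suite):

    # Illustrative wrapper around the listener RPCs seen in this log; the helper
    # name and structure are hypothetical, only the rpc.py arguments come from the log.
    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    NQN = "nqn.2016-06.io.spdk:cnode1"
    LISTENER_ARGS = ["-t", "tcp", "-a", "10.0.0.3", "-s", "4420"]

    def set_listener(present: bool) -> None:
        # Add or remove the TCP listener; removing it provokes the
        # reconnect/reset loop recorded above, re-adding it lets the
        # controller reset complete.
        action = "nvmf_subsystem_add_listener" if present else "nvmf_subsystem_remove_listener"
        subprocess.run([RPC, action, NQN, *LISTENER_ARGS], check=True)

    if __name__ == "__main__":
        set_listener(False)   # drop the listener
        set_listener(True)    # restore it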
00:20:24.890 1866.50 IOPS, 7.29 MiB/s [2024-10-15T08:31:27.556Z] 2984.60 IOPS, 11.66 MiB/s [2024-10-15T08:31:28.505Z] 4072.50 IOPS, 15.91 MiB/s [2024-10-15T08:31:29.470Z] 4841.86 IOPS, 18.91 MiB/s [2024-10-15T08:31:30.407Z] 5424.38 IOPS, 21.19 MiB/s [2024-10-15T08:31:31.783Z] 5882.11 IOPS, 22.98 MiB/s [2024-10-15T08:31:31.783Z] 6259.50 IOPS, 24.45 MiB/s
00:20:30.052 Latency(us)
00:20:30.052 [2024-10-15T08:31:31.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:30.052 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:30.052 Verification LBA range: start 0x0 length 0x4000
00:20:30.052 NVMe0n1 : 10.01 6266.88 24.48 0.00 0.00 20382.66 1407.53 3035150.89
00:20:30.052 [2024-10-15T08:31:31.783Z] ===================================================================================================================
00:20:30.052 [2024-10-15T08:31:31.783Z] Total : 6266.88 24.48 0.00 0.00 20382.66 1407.53 3035150.89
00:20:30.052 {
00:20:30.052 "results": [
00:20:30.052 {
00:20:30.052 "job": "NVMe0n1",
00:20:30.052 "core_mask": "0x4",
00:20:30.052 "workload": "verify",
00:20:30.052 "status": "finished",
00:20:30.052 "verify_range": {
00:20:30.052 "start": 0,
00:20:30.052 "length": 16384
00:20:30.052 },
00:20:30.052 "queue_depth": 128,
00:20:30.052 "io_size": 4096,
00:20:30.052 "runtime": 10.008648,
00:20:30.052 "iops": 6266.880401828499,
00:20:30.052 "mibps": 24.480001569642575,
00:20:30.052 "io_failed": 0,
00:20:30.052 "io_timeout": 0,
00:20:30.052 "avg_latency_us": 20382.662436890627,
00:20:30.052 "min_latency_us": 1407.5345454545454,
00:20:30.052 "max_latency_us": 3035150.8945454545
00:20:30.052 }
00:20:30.052 ],
00:20:30.052 "core_count": 1
00:20:30.052 }
00:20:30.052 08:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82617
00:20:30.052 08:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:20:30.052 08:31:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:30.052 Running I/O for 10 seconds...
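The Latency(us) table above is printed from the JSON blob that follows it; the per-job fields ("iops", "mibps", "avg_latency_us", "min_latency_us", "max_latency_us", "io_failed") line up with the table columns. A minimal sketch of reading those numbers back out of a saved copy of that JSON, assuming it has been written to a local file (the file name is only an example, not something the harness produces):

    # Illustrative only: parse a saved copy of the bdevperf result JSON shown above.
    import json

    with open("bdevperf_results.json") as f:  # hypothetical file name
        report = json.load(f)

    for job in report["results"]:
        print(f'{job["job"]}: {job["iops"]:.2f} IOPS, '
              f'{job["mibps"]:.2f} MiB/s, '
              f'avg latency {job["avg_latency_us"]:.2f} us, '
              f'failed I/O {job["io_failed"]}')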
00:20:30.990 08:31:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:30.990 6827.00 IOPS, 26.67 MiB/s [2024-10-15T08:31:32.721Z] [2024-10-15 08:31:32.683638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.990 [2024-10-15 08:31:32.683735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:30.990 [2024-10-15 08:31:32.683946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63888 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.683986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.683997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.990 [2024-10-15 08:31:32.684006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.990 [2024-10-15 08:31:32.684017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:30.991 [2024-10-15 08:31:32.684215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 
08:31:32.684418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.991 [2024-10-15 08:31:32.684819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.991 [2024-10-15 08:31:32.684828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.684839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.684848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.684860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.684869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.684880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.684890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.684901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.684910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.684921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.684930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.684941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.684950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.684961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.684970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.684982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.684991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:30.992 [2024-10-15 08:31:32.685279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.992 [2024-10-15 08:31:32.685606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.992 [2024-10-15 08:31:32.685618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.685982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.685993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.686002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.686023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.686043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.686074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.686095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.686124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64696 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.686146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.993 [2024-10-15 08:31:32.686167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0720 is same with the state(6) to be set 00:20:30.993 [2024-10-15 08:31:32.686202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.993 [2024-10-15 08:31:32.686216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.993 [2024-10-15 08:31:32.686224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:20:30.993 [2024-10-15 08:31:32.686234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.993 [2024-10-15 08:31:32.686252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.993 [2024-10-15 08:31:32.686260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64736 len:8 PRP1 0x0 PRP2 0x0 00:20:30.993 [2024-10-15 08:31:32.686269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.993 [2024-10-15 08:31:32.686285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.993 [2024-10-15 08:31:32.686298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64744 len:8 PRP1 0x0 PRP2 0x0 00:20:30.993 [2024-10-15 08:31:32.686307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.993 [2024-10-15 08:31:32.686324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.993 [2024-10-15 08:31:32.686341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64752 len:8 PRP1 0x0 PRP2 0x0 00:20:30.993 [2024-10-15 08:31:32.686350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.993 [2024-10-15 08:31:32.686367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.993 [2024-10-15 08:31:32.686375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64760 len:8 PRP1 0x0 PRP2 0x0 00:20:30.993 [2024-10-15 08:31:32.686384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.993 [2024-10-15 08:31:32.686400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.993 [2024-10-15 08:31:32.686414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64768 len:8 PRP1 0x0 PRP2 0x0 00:20:30.993 [2024-10-15 08:31:32.686423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.993 [2024-10-15 08:31:32.686432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.993 [2024-10-15 08:31:32.686440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.993 [2024-10-15 08:31:32.686448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64776 len:8 PRP1 0x0 PRP2 0x0 00:20:30.993 [2024-10-15 08:31:32.686456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64784 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64792 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64800 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64808 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64816 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64824 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64832 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64840 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:30.994 [2024-10-15 08:31:32.686747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:30.994 [2024-10-15 08:31:32.686755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64848 len:8 PRP1 0x0 PRP2 0x0 00:20:30.994 [2024-10-15 08:31:32.686764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686827] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cf0720 was disconnected and freed. reset controller. 
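The (00/08) pair printed with each aborted completion is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08, i.e. the command was aborted because its submission queue was deleted while the controller reset was in flight, which matches the "ABORTED - SQ DELETION" text. When triaging a capture like this it can help to count how much I/O was cancelled that way; a minimal sketch, assuming the console output above was saved to bdevperf-console.log (a hypothetical file name):

    # Count completions aborted by SQ deletion and tally the affected commands.
    grep -c 'ABORTED - SQ DELETION (00/08)' bdevperf-console.log
    grep -o 'lba:[0-9]* len:[0-9]*' bdevperf-console.log | sort | uniq -c | sort -rn | head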
00:20:30.994 [2024-10-15 08:31:32.686908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.994 [2024-10-15 08:31:32.686925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.994 [2024-10-15 08:31:32.686945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.994 [2024-10-15 08:31:32.686965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.994 [2024-10-15 08:31:32.686985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.994 [2024-10-15 08:31:32.686994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e2e0 is same with the state(6) to be set 00:20:30.994 [2024-10-15 08:31:32.687237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:30.994 [2024-10-15 08:31:32.687272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e2e0 (9): Bad file descriptor 00:20:30.994 [2024-10-15 08:31:32.687382] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.994 [2024-10-15 08:31:32.687414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6e2e0 with addr=10.0.0.3, port=4420 00:20:30.994 [2024-10-15 08:31:32.687427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e2e0 is same with the state(6) to be set 00:20:30.994 [2024-10-15 08:31:32.687445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e2e0 (9): Bad file descriptor 00:20:30.994 [2024-10-15 08:31:32.687462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:30.994 [2024-10-15 08:31:32.687471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:30.994 [2024-10-15 08:31:32.687482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:30.994 [2024-10-15 08:31:32.687502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
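errno 111 from uring_sock_create's connect() is ECONNREFUSED: the target side is not listening on 10.0.0.3:4420 at this point, so every reconnect attempt is refused until the listener is re-added at host/timeout.sh@102 further down. The mapping is easy to confirm from the shell; a minimal sketch (the python3 one-liner is purely illustrative):

    # errno 111 on Linux is ECONNREFUSED ("Connection refused").
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'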
00:20:30.994 [2024-10-15 08:31:32.687514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:30.994 08:31:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:32.199 3989.50 IOPS, 15.58 MiB/s [2024-10-15T08:31:33.930Z] [2024-10-15 08:31:33.687689] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:32.199 [2024-10-15 08:31:33.687785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6e2e0 with addr=10.0.0.3, port=4420 00:20:32.199 [2024-10-15 08:31:33.687802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e2e0 is same with the state(6) to be set 00:20:32.199 [2024-10-15 08:31:33.687832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e2e0 (9): Bad file descriptor 00:20:32.199 [2024-10-15 08:31:33.687853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:32.199 [2024-10-15 08:31:33.687863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:32.199 [2024-10-15 08:31:33.687875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:32.199 [2024-10-15 08:31:33.687907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:32.199 [2024-10-15 08:31:33.687919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:33.132 2659.67 IOPS, 10.39 MiB/s [2024-10-15T08:31:34.864Z] [2024-10-15 08:31:34.688082] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.133 [2024-10-15 08:31:34.688169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6e2e0 with addr=10.0.0.3, port=4420 00:20:33.133 [2024-10-15 08:31:34.688188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e2e0 is same with the state(6) to be set 00:20:33.133 [2024-10-15 08:31:34.688216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e2e0 (9): Bad file descriptor 00:20:33.133 [2024-10-15 08:31:34.688237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:33.133 [2024-10-15 08:31:34.688248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:33.133 [2024-10-15 08:31:34.688259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:33.133 [2024-10-15 08:31:34.688292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:33.133 [2024-10-15 08:31:34.688304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:34.067 1994.75 IOPS, 7.79 MiB/s [2024-10-15T08:31:35.798Z] [2024-10-15 08:31:35.692327] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:34.067 [2024-10-15 08:31:35.692469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c6e2e0 with addr=10.0.0.3, port=4420 00:20:34.067 [2024-10-15 08:31:35.692492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6e2e0 is same with the state(6) to be set 00:20:34.067 [2024-10-15 08:31:35.692739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c6e2e0 (9): Bad file descriptor 00:20:34.067 [2024-10-15 08:31:35.693005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:34.067 [2024-10-15 08:31:35.693026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:34.067 [2024-10-15 08:31:35.693038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:34.067 [2024-10-15 08:31:35.696900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:34.067 [2024-10-15 08:31:35.696949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:34.067 08:31:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:34.326 [2024-10-15 08:31:36.048616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:34.583 08:31:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82617 00:20:35.099 1595.80 IOPS, 6.23 MiB/s [2024-10-15T08:31:36.830Z] [2024-10-15 08:31:36.739448] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
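With the listener restored by host/timeout.sh@102 the target prints "Listening on 10.0.0.3 port 4420" again, the next reconnect attempt goes through, and bdev_nvme logs "Resetting controller successful", after which the IOPS counters start climbing back. The same listener toggle can be replayed by hand against a running target; a minimal sketch built only from the rpc.py calls that appear in this log (address, port, and NQN taken from the output above):

    # Drop the TCP listener, let the host's reconnect attempts fail for a few seconds,
    # then restore it and let the controller reset complete.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420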
00:20:36.964 2592.33 IOPS, 10.13 MiB/s [2024-10-15T08:31:39.629Z] 3633.29 IOPS, 14.19 MiB/s [2024-10-15T08:31:40.560Z] 4419.12 IOPS, 17.26 MiB/s [2024-10-15T08:31:41.932Z] 5021.44 IOPS, 19.62 MiB/s [2024-10-15T08:31:41.932Z] 5492.10 IOPS, 21.45 MiB/s 00:20:40.201 Latency(us) 00:20:40.201 [2024-10-15T08:31:41.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.201 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:40.201 Verification LBA range: start 0x0 length 0x4000 00:20:40.201 NVMe0n1 : 10.01 5499.09 21.48 3770.41 0.00 13774.74 700.04 3019898.88 00:20:40.201 [2024-10-15T08:31:41.932Z] =================================================================================================================== 00:20:40.201 [2024-10-15T08:31:41.932Z] Total : 5499.09 21.48 3770.41 0.00 13774.74 0.00 3019898.88 00:20:40.201 { 00:20:40.201 "results": [ 00:20:40.201 { 00:20:40.201 "job": "NVMe0n1", 00:20:40.201 "core_mask": "0x4", 00:20:40.201 "workload": "verify", 00:20:40.201 "status": "finished", 00:20:40.201 "verify_range": { 00:20:40.201 "start": 0, 00:20:40.201 "length": 16384 00:20:40.201 }, 00:20:40.201 "queue_depth": 128, 00:20:40.201 "io_size": 4096, 00:20:40.201 "runtime": 10.007658, 00:20:40.201 "iops": 5499.088797798646, 00:20:40.201 "mibps": 21.48081561640096, 00:20:40.201 "io_failed": 37733, 00:20:40.201 "io_timeout": 0, 00:20:40.201 "avg_latency_us": 13774.744988465602, 00:20:40.201 "min_latency_us": 700.0436363636363, 00:20:40.201 "max_latency_us": 3019898.88 00:20:40.201 } 00:20:40.201 ], 00:20:40.201 "core_count": 1 00:20:40.201 } 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82497 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82497 ']' 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82497 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82497 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:40.201 killing process with pid 82497 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82497' 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82497 00:20:40.201 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.201 00:20:40.201 Latency(us) 00:20:40.201 [2024-10-15T08:31:41.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.201 [2024-10-15T08:31:41.932Z] =================================================================================================================== 00:20:40.201 [2024-10-15T08:31:41.932Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82497 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82732 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82732 /var/tmp/bdevperf.sock 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 82732 ']' 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:40.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.201 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.202 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:40.202 08:31:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:40.202 [2024-10-15 08:31:41.894419] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:20:40.202 [2024-10-15 08:31:41.894516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82732 ] 00:20:40.459 [2024-10-15 08:31:42.028356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.459 [2024-10-15 08:31:42.099483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.459 [2024-10-15 08:31:42.172171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:40.718 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:40.718 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:40.718 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82740 00:20:40.718 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82732 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:40.718 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:40.976 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:41.233 NVMe0n1 00:20:41.233 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82782 00:20:41.233 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:41.233 08:31:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:20:41.234 Running I/O for 10 seconds... 
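This second bdevperf instance is driven entirely over its RPC socket: the controller is attached with a 5-second ctrlr-loss timeout and a 2-second reconnect delay, and perform_tests then launches the 10-second randread workload whose output follows. A minimal sketch of the same sequence using only the commands traced above (paths match this CI workspace and would differ elsewhere):

    # Start bdevperf idle (-z), wait for its RPC socket, then configure and run it.
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done   # the harness does this via waitforlisten
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests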
00:20:42.168 08:31:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:42.427 14859.00 IOPS, 58.04 MiB/s [2024-10-15T08:31:44.158Z] [2024-10-15 08:31:44.104435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 
08:31:44.104663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to 
be set 00:20:42.427 [2024-10-15 08:31:44.104841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.427 [2024-10-15 08:31:44.104938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.104946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.104954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.104962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.104969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.104977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.104986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.104994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105236] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 
00:20:42.428 [2024-10-15 08:31:44.105417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2250 is same with the state(6) to be set 00:20:42.428 [2024-10-15 08:31:44.105614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.428 [2024-10-15 08:31:44.105648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.428 [2024-10-15 08:31:44.105673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.428 [2024-10-15 08:31:44.105685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.428 [2024-10-15 08:31:44.105698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.428 [2024-10-15 08:31:44.105708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.428 [2024-10-15 08:31:44.105719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.428 [2024-10-15 08:31:44.105728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.428 [2024-10-15 08:31:44.105740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.428 [2024-10-15 08:31:44.105749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105903] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.105984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.105996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:33528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 
nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.429 [2024-10-15 08:31:44.106339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:42.429 [2024-10-15 08:31:44.106350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93688 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:20:42.429 [2024-10-15 08:31:44.106360 - 08:31:44.108322] nvme_qpair.c: repeated NOTICE pairs, one per queued READ on qid:1 (cid 91 down to 0, then 125 and 126; len:8, lba varies): nvme_io_qpair_print_command followed by spdk_nvme_print_completion reporting ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.432 [2024-10-15 08:31:44.108332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b570 is same with the state(6) to be set
00:20:42.432 [2024-10-15 08:31:44.108345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:20:42.432 [2024-10-15 08:31:44.108353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:20:42.432 [2024-10-15 08:31:44.108367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88128 len:8 PRP1 0x0 PRP2 0x0
00:20:42.432 [2024-10-15 08:31:44.108376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:42.432 [2024-10-15 08:31:44.108441] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x249b570 was disconnected and freed. reset controller.
00:20:42.432 [2024-10-15 08:31:44.108727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:42.432 [2024-10-15 08:31:44.108825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242d2e0 (9): Bad file descriptor
00:20:42.432 [2024-10-15 08:31:44.108950] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:42.432 [2024-10-15 08:31:44.108972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242d2e0 with addr=10.0.0.3, port=4420
00:20:42.432 [2024-10-15 08:31:44.108984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242d2e0 is same with the state(6) to be set
00:20:42.432 [2024-10-15 08:31:44.109002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242d2e0 (9): Bad file descriptor
00:20:42.432 [2024-10-15 08:31:44.109019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:42.432 [2024-10-15 08:31:44.109029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:42.432 [2024-10-15 08:31:44.109040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
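The abort storm above is one NOTICE pair per queued READ, so the useful information is simply how many commands were still outstanding and why they completed. A small filter makes a capture like this easier to scan; this is only a sketch, and nvmf_timeout.log is a hypothetical file holding the console output shown here:

#!/usr/bin/env bash
# Summarize SPDK qpair abort notices from a saved console capture.
LOG=${1:-nvmf_timeout.log}   # hypothetical capture of the output above

# Total completions reported as "ABORTED - SQ DELETION".
printf 'aborted completions: %s\n' "$(grep -c 'ABORTED - SQ DELETION' "$LOG")"

# Command ids that were still outstanding when the submission queue was deleted.
grep -o 'READ sqid:[0-9]* cid:[0-9]*' "$LOG" \
  | awk -F'cid:' '{print $2}' | sort -n | uniq -c | head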
00:20:42.432 [2024-10-15 08:31:44.109061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:42.432 [2024-10-15 08:31:44.109074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:42.432 08:31:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82782
00:20:44.345 8446.50 IOPS, 32.99 MiB/s [2024-10-15T08:31:46.334Z] 5631.00 IOPS, 22.00 MiB/s [2024-10-15T08:31:46.334Z]
[2024-10-15 08:31:46.109452] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:44.603 [2024-10-15 08:31:46.109555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242d2e0 with addr=10.0.0.3, port=4420
00:20:44.603 [2024-10-15 08:31:46.109573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242d2e0 is same with the state(6) to be set
00:20:44.603 [2024-10-15 08:31:46.109612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242d2e0 (9): Bad file descriptor
00:20:44.603 [2024-10-15 08:31:46.109632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:44.603 [2024-10-15 08:31:46.109643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:44.603 [2024-10-15 08:31:46.109655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:44.603 [2024-10-15 08:31:46.109702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:44.603 [2024-10-15 08:31:46.109716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:46.469 4223.25 IOPS, 16.50 MiB/s [2024-10-15T08:31:48.200Z] 3378.60 IOPS, 13.20 MiB/s [2024-10-15T08:31:48.200Z]
[2024-10-15 08:31:48.109953] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:46.469 [2024-10-15 08:31:48.110063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242d2e0 with addr=10.0.0.3, port=4420
00:20:46.469 [2024-10-15 08:31:48.110081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242d2e0 is same with the state(6) to be set
00:20:46.469 [2024-10-15 08:31:48.110111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242d2e0 (9): Bad file descriptor
00:20:46.469 [2024-10-15 08:31:48.110162] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:46.469 [2024-10-15 08:31:48.110173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:46.469 [2024-10-15 08:31:48.110185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:46.469 [2024-10-15 08:31:48.110226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:46.469 [2024-10-15 08:31:48.110243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:48.382 2815.50 IOPS, 11.00 MiB/s [2024-10-15T08:31:50.113Z] 2413.29 IOPS, 9.43 MiB/s [2024-10-15T08:31:50.113Z]
[2024-10-15 08:31:50.110398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
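Each retry above ends in connect() failed, errno = 111, which is ECONNREFUSED on Linux: nothing is accepting connections on 10.0.0.3:4420 while the controller is down, so every reconnect attempt is turned away immediately and the IOPS samples keep falling. A throwaway probe run next to the test shows the same condition; a sketch only, assuming a bash with /dev/tcp support and the address and port used in this run:

# Probe the NVMe-oF TCP listener the host keeps trying to reach.
for attempt in {1..5}; do
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
    echo "attempt $attempt: 10.0.0.3:4420 is accepting connections"
  else
    echo "attempt $attempt: connection refused or timed out"
  fi
  sleep 2   # roughly the 2 s reconnect cadence visible in the trace further down
done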
00:20:48.382 [2024-10-15 08:31:50.110465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:48.382 [2024-10-15 08:31:50.110477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:48.382 [2024-10-15 08:31:50.110489] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:20:48.382 [2024-10-15 08:31:50.110537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:49.579 2111.62 IOPS, 8.25 MiB/s
00:20:49.579 Latency(us)
00:20:49.579 [2024-10-15T08:31:51.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:49.579 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:20:49.579 NVMe0n1 : 8.15 2073.54 8.10 15.71 0.00 61158.40 7983.48 7015926.69
00:20:49.579 [2024-10-15T08:31:51.310Z] ===================================================================================================================
00:20:49.579 [2024-10-15T08:31:51.310Z] Total : 2073.54 8.10 15.71 0.00 61158.40 7983.48 7015926.69
00:20:49.579 {
00:20:49.579   "results": [
00:20:49.579     {
00:20:49.579       "job": "NVMe0n1",
00:20:49.579       "core_mask": "0x4",
00:20:49.579       "workload": "randread",
00:20:49.579       "status": "finished",
00:20:49.579       "queue_depth": 128,
00:20:49.579       "io_size": 4096,
00:20:49.579       "runtime": 8.146923,
00:20:49.579       "iops": 2073.5435943116195,
00:20:49.579       "mibps": 8.099779665279764,
00:20:49.579       "io_failed": 128,
00:20:49.579       "io_timeout": 0,
00:20:49.579       "avg_latency_us": 61158.40024611309,
00:20:49.579       "min_latency_us": 7983.476363636363,
00:20:49.579       "max_latency_us": 7015926.69090909
00:20:49.579     }
00:20:49.579   ],
00:20:49.579   "core_count": 1
00:20:49.579 }
00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:20:49.579 Attaching 5 probes...
00:20:49.579 1330.792374: reset bdev controller NVMe0 00:20:49.579 1330.946633: reconnect bdev controller NVMe0 00:20:49.579 3331.380475: reconnect delay bdev controller NVMe0 00:20:49.579 3331.404508: reconnect bdev controller NVMe0 00:20:49.579 5331.895303: reconnect delay bdev controller NVMe0 00:20:49.579 5331.917916: reconnect bdev controller NVMe0 00:20:49.579 7332.447498: reconnect delay bdev controller NVMe0 00:20:49.579 7332.470306: reconnect bdev controller NVMe0 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82740 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82732 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82732 ']' 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82732 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82732 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:49.579 killing process with pid 82732 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82732' 00:20:49.579 Received shutdown signal, test time was about 8.216927 seconds 00:20:49.579 00:20:49.579 Latency(us) 00:20:49.579 [2024-10-15T08:31:51.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.579 [2024-10-15T08:31:51.310Z] =================================================================================================================== 00:20:49.579 [2024-10-15T08:31:51.310Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82732 00:20:49.579 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82732 00:20:49.837 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.094 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:50.094 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:50.094 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:50.094 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:50.094 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:50.094 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:50.094 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:50.094 08:31:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:50.094 rmmod nvme_tcp 00:20:50.094 rmmod nvme_fabrics 00:20:50.094 rmmod nvme_keyring 00:20:50.094 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@515 -- # '[' -n 82300 ']' 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # killprocess 82300 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 82300 ']' 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 82300 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:50.095 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82300 00:20:50.353 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:50.353 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:50.353 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82300' 00:20:50.353 killing process with pid 82300 00:20:50.353 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 82300 00:20:50.353 08:31:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 82300 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-save 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:50.611 08:31:52 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.611 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.869 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:50.869 00:20:50.869 real 0m47.030s 00:20:50.869 user 2m17.442s 00:20:50.869 sys 0m5.977s 00:20:50.869 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.869 08:31:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:50.869 ************************************ 00:20:50.869 END TEST nvmf_timeout 00:20:50.869 ************************************ 00:20:50.869 08:31:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:50.869 08:31:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:50.869 00:20:50.869 real 5m12.326s 00:20:50.869 user 13m29.760s 00:20:50.869 sys 1m12.160s 00:20:50.869 08:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.869 08:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.869 ************************************ 00:20:50.869 END TEST nvmf_host 00:20:50.869 ************************************ 00:20:50.869 08:31:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:50.869 08:31:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:50.869 00:20:50.869 real 13m8.351s 00:20:50.869 user 31m28.451s 00:20:50.869 sys 3m18.712s 00:20:50.869 08:31:52 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.869 08:31:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:50.869 ************************************ 00:20:50.869 END TEST nvmf_tcp 00:20:50.869 ************************************ 00:20:50.869 08:31:52 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:20:50.869 08:31:52 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:50.870 08:31:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:50.870 08:31:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:50.870 08:31:52 -- common/autotest_common.sh@10 -- # set +x 00:20:50.870 ************************************ 00:20:50.870 START TEST nvmf_dif 00:20:50.870 ************************************ 00:20:50.870 08:31:52 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:50.870 * Looking for test storage... 
00:20:50.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:50.870 08:31:52 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:51.129 08:31:52 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:20:51.129 08:31:52 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:51.129 08:31:52 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:51.129 08:31:52 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.129 08:31:52 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:51.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.129 --rc genhtml_branch_coverage=1 00:20:51.129 --rc genhtml_function_coverage=1 00:20:51.129 --rc genhtml_legend=1 00:20:51.129 --rc geninfo_all_blocks=1 00:20:51.129 --rc geninfo_unexecuted_blocks=1 00:20:51.129 00:20:51.129 ' 00:20:51.129 08:31:52 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:51.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.129 --rc genhtml_branch_coverage=1 00:20:51.129 --rc genhtml_function_coverage=1 00:20:51.129 --rc genhtml_legend=1 00:20:51.129 --rc geninfo_all_blocks=1 00:20:51.129 --rc geninfo_unexecuted_blocks=1 00:20:51.129 00:20:51.129 ' 00:20:51.129 08:31:52 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:20:51.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.129 --rc genhtml_branch_coverage=1 00:20:51.129 --rc genhtml_function_coverage=1 00:20:51.129 --rc genhtml_legend=1 00:20:51.129 --rc geninfo_all_blocks=1 00:20:51.129 --rc geninfo_unexecuted_blocks=1 00:20:51.129 00:20:51.129 ' 00:20:51.129 08:31:52 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:51.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.129 --rc genhtml_branch_coverage=1 00:20:51.129 --rc genhtml_function_coverage=1 00:20:51.129 --rc genhtml_legend=1 00:20:51.129 --rc geninfo_all_blocks=1 00:20:51.129 --rc geninfo_unexecuted_blocks=1 00:20:51.129 00:20:51.129 ' 00:20:51.129 08:31:52 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.129 08:31:52 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.129 08:31:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.130 08:31:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.130 08:31:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.130 08:31:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.130 08:31:52 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.130 08:31:52 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.130 08:31:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:51.130 08:31:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.130 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.130 08:31:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:51.130 08:31:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:51.130 08:31:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:51.130 08:31:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:51.130 08:31:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.130 08:31:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:51.130 08:31:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:20:51.130 08:31:52 
nvmf_dif -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@458 -- # nvmf_veth_init 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:51.130 Cannot find device "nvmf_init_br" 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:51.130 Cannot find device "nvmf_init_br2" 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:51.130 Cannot find device "nvmf_tgt_br" 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:51.130 Cannot find device "nvmf_tgt_br2" 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:51.130 Cannot find device "nvmf_init_br" 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:51.130 Cannot find device "nvmf_init_br2" 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:51.130 Cannot find device "nvmf_tgt_br" 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:51.130 08:31:52 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:51.391 Cannot find device "nvmf_tgt_br2" 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:51.391 Cannot find device "nvmf_br" 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:20:51.391 Cannot find device "nvmf_init_if" 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:51.391 Cannot find device "nvmf_init_if2" 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:51.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:51.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:51.391 08:31:52 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:51.391 08:31:53 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:51.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:51.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:20:51.391 00:20:51.391 --- 10.0.0.3 ping statistics --- 00:20:51.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.391 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:51.391 08:31:53 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:51.391 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:51.392 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:20:51.392 00:20:51.392 --- 10.0.0.4 ping statistics --- 00:20:51.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.392 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:51.392 08:31:53 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:51.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:51.392 00:20:51.392 --- 10.0.0.1 ping statistics --- 00:20:51.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.392 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:51.392 08:31:53 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:51.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:51.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:51.650 00:20:51.650 --- 10.0.0.2 ping statistics --- 00:20:51.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.650 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:51.650 08:31:53 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.650 08:31:53 nvmf_dif -- nvmf/common.sh@459 -- # return 0 00:20:51.650 08:31:53 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:20:51.650 08:31:53 nvmf_dif -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:51.908 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:51.908 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:51.908 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:51.908 08:31:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:51.908 08:31:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:51.908 08:31:53 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:51.908 08:31:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=83274 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:51.908 08:31:53 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 83274 00:20:51.908 08:31:53 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 83274 ']' 00:20:51.908 08:31:53 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.908 08:31:53 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.908 08:31:53 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.908 08:31:53 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.908 08:31:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:51.908 [2024-10-15 08:31:53.598329] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:20:51.908 [2024-10-15 08:31:53.598450] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.166 [2024-10-15 08:31:53.741734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.167 [2024-10-15 08:31:53.825946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:52.167 [2024-10-15 08:31:53.826012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.167 [2024-10-15 08:31:53.826027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.167 [2024-10-15 08:31:53.826039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.167 [2024-10-15 08:31:53.826049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.167 [2024-10-15 08:31:53.826652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.425 [2024-10-15 08:31:53.904282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:52.990 08:31:54 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:52.990 08:31:54 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:20:52.990 08:31:54 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:52.990 08:31:54 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:52.990 08:31:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:52.990 08:31:54 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.990 08:31:54 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:52.990 08:31:54 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:52.990 08:31:54 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.990 08:31:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 [2024-10-15 08:31:54.634581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.991 08:31:54 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.991 08:31:54 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:52.991 08:31:54 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:52.991 08:31:54 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:52.991 08:31:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 ************************************ 00:20:52.991 START TEST fio_dif_1_default 00:20:52.991 ************************************ 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 bdev_null0 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:52.991 
08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:52.991 [2024-10-15 08:31:54.679356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:52.991 { 00:20:52.991 "params": { 00:20:52.991 "name": "Nvme$subsystem", 00:20:52.991 "trtype": "$TEST_TRANSPORT", 00:20:52.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.991 "adrfam": "ipv4", 00:20:52.991 "trsvcid": "$NVMF_PORT", 00:20:52.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.991 "hdgst": ${hdgst:-false}, 00:20:52.991 "ddgst": ${ddgst:-false} 00:20:52.991 }, 00:20:52.991 "method": "bdev_nvme_attach_controller" 00:20:52.991 } 00:20:52.991 EOF 00:20:52.991 )") 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:52.991 "params": { 00:20:52.991 "name": "Nvme0", 00:20:52.991 "trtype": "tcp", 00:20:52.991 "traddr": "10.0.0.3", 00:20:52.991 "adrfam": "ipv4", 00:20:52.991 "trsvcid": "4420", 00:20:52.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:52.991 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:52.991 "hdgst": false, 00:20:52.991 "ddgst": false 00:20:52.991 }, 00:20:52.991 "method": "bdev_nvme_attach_controller" 00:20:52.991 }' 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:52.991 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.274 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.274 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:53.274 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:53.274 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:53.274 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:53.274 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:53.274 08:31:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:53.274 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:53.274 fio-3.35 00:20:53.274 Starting 1 thread 00:21:05.568 00:21:05.568 filename0: (groupid=0, jobs=1): err= 0: pid=83341: Tue Oct 15 08:32:05 2024 00:21:05.568 read: IOPS=8697, BW=34.0MiB/s (35.6MB/s)(340MiB/10001msec) 00:21:05.568 slat (nsec): min=6163, max=83997, avg=8849.23, stdev=3675.36 00:21:05.568 clat (usec): min=328, max=4607, avg=433.55, stdev=50.56 00:21:05.568 lat (usec): min=334, max=4642, avg=442.40, stdev=51.20 00:21:05.568 clat percentiles (usec): 00:21:05.568 | 1.00th=[ 363], 5.00th=[ 
375], 10.00th=[ 383], 20.00th=[ 400], 00:21:05.568 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 433], 60.00th=[ 441], 00:21:05.568 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 486], 95.00th=[ 502], 00:21:05.568 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 603], 99.95th=[ 635], 00:21:05.568 | 99.99th=[ 1500] 00:21:05.568 bw ( KiB/s): min=33344, max=35936, per=100.00%, avg=34839.16, stdev=606.98, samples=19 00:21:05.568 iops : min= 8336, max= 8984, avg=8709.58, stdev=151.67, samples=19 00:21:05.568 lat (usec) : 500=94.67%, 750=5.31%, 1000=0.01% 00:21:05.568 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:21:05.568 cpu : usr=84.50%, sys=13.38%, ctx=18, majf=0, minf=0 00:21:05.568 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.568 issued rwts: total=86988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.568 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:05.568 00:21:05.568 Run status group 0 (all jobs): 00:21:05.568 READ: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=340MiB (356MB), run=10001-10001msec 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 ************************************ 00:21:05.568 END TEST fio_dif_1_default 00:21:05.568 ************************************ 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 00:21:05.568 real 0m11.173s 00:21:05.568 user 0m9.215s 00:21:05.568 sys 0m1.667s 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 08:32:05 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:05.568 08:32:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:05.568 08:32:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 ************************************ 00:21:05.568 START TEST fio_dif_1_multi_subsystems 00:21:05.568 ************************************ 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 bdev_null0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 [2024-10-15 08:32:05.907869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 bdev_null1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:05.568 { 00:21:05.568 "params": { 00:21:05.568 "name": "Nvme$subsystem", 00:21:05.568 "trtype": "$TEST_TRANSPORT", 00:21:05.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.568 "adrfam": "ipv4", 00:21:05.568 "trsvcid": "$NVMF_PORT", 00:21:05.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.568 "hdgst": ${hdgst:-false}, 00:21:05.568 "ddgst": ${ddgst:-false} 00:21:05.568 }, 00:21:05.568 "method": "bdev_nvme_attach_controller" 00:21:05.568 } 00:21:05.568 EOF 00:21:05.568 )") 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:05.568 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 
00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:05.569 { 00:21:05.569 "params": { 00:21:05.569 "name": "Nvme$subsystem", 00:21:05.569 "trtype": "$TEST_TRANSPORT", 00:21:05.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.569 "adrfam": "ipv4", 00:21:05.569 "trsvcid": "$NVMF_PORT", 00:21:05.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.569 "hdgst": ${hdgst:-false}, 00:21:05.569 "ddgst": ${ddgst:-false} 00:21:05.569 }, 00:21:05.569 "method": "bdev_nvme_attach_controller" 00:21:05.569 } 00:21:05.569 EOF 00:21:05.569 )") 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
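The entries above interleave two generators: gen_fio_conf emits the fio job description that fio later reads as its positional /dev/fd/61 argument, while gen_nvmf_target_json assembles the bdev JSON handed over via --spdk_json_conf (its output is printed just below). A plausible reconstruction of the job file for this two-subsystem case, pieced together from the fio banner lines in this log (randread, 4 KiB blocks, iodepth 4, job sections filename0/filename1), is sketched here; the exact option set and the Nvme0n1/Nvme1n1 bdev names are assumptions, not copied from dif.sh itself.

# hypothetical on-disk equivalent of the job description streamed over /dev/fd/61
cat > dif.fio <<'EOF'
[global]
; SPDK fio bdev plugin (injected via LD_PRELOAD further down); it requires fio's thread mode
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=4
; matches the ~10001 msec runtime reported in the results below
time_based=1
runtime=10

; assumed bdev names: namespace bdevs of the Nvme0/Nvme1 controllers attached by the JSON config
[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF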
00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:05.569 "params": { 00:21:05.569 "name": "Nvme0", 00:21:05.569 "trtype": "tcp", 00:21:05.569 "traddr": "10.0.0.3", 00:21:05.569 "adrfam": "ipv4", 00:21:05.569 "trsvcid": "4420", 00:21:05.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.569 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:05.569 "hdgst": false, 00:21:05.569 "ddgst": false 00:21:05.569 }, 00:21:05.569 "method": "bdev_nvme_attach_controller" 00:21:05.569 },{ 00:21:05.569 "params": { 00:21:05.569 "name": "Nvme1", 00:21:05.569 "trtype": "tcp", 00:21:05.569 "traddr": "10.0.0.3", 00:21:05.569 "adrfam": "ipv4", 00:21:05.569 "trsvcid": "4420", 00:21:05.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.569 "hdgst": false, 00:21:05.569 "ddgst": false 00:21:05.569 }, 00:21:05.569 "method": "bdev_nvme_attach_controller" 00:21:05.569 }' 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:05.569 08:32:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:05.569 08:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:05.569 08:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:05.569 08:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:05.569 08:32:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:05.569 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:05.569 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:05.569 fio-3.35 00:21:05.569 Starting 2 threads 00:21:15.543 00:21:15.544 filename0: (groupid=0, jobs=1): err= 0: pid=83506: Tue Oct 15 08:32:16 2024 00:21:15.544 read: IOPS=4836, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:21:15.544 slat (nsec): min=6416, max=69237, avg=14096.57, stdev=4923.22 00:21:15.544 clat (usec): min=382, max=3717, avg=787.96, stdev=56.87 00:21:15.544 lat (usec): min=389, max=3741, avg=802.06, stdev=57.61 00:21:15.544 clat percentiles (usec): 00:21:15.544 | 1.00th=[ 685], 5.00th=[ 701], 10.00th=[ 717], 20.00th=[ 742], 00:21:15.544 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:21:15.544 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 848], 95.00th=[ 865], 00:21:15.544 | 99.00th=[ 898], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 979], 00:21:15.544 | 99.99th=[ 1020] 00:21:15.544 bw ( KiB/s): min=18464, max=20544, per=50.03%, avg=19357.00, stdev=627.21, samples=19 00:21:15.544 iops : min= 4616, max= 
5136, avg=4839.21, stdev=156.78, samples=19 00:21:15.544 lat (usec) : 500=0.02%, 750=24.38%, 1000=75.59% 00:21:15.544 lat (msec) : 2=0.01%, 4=0.01% 00:21:15.544 cpu : usr=89.81%, sys=8.69%, ctx=16, majf=0, minf=0 00:21:15.544 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.544 issued rwts: total=48372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.544 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:15.544 filename1: (groupid=0, jobs=1): err= 0: pid=83507: Tue Oct 15 08:32:16 2024 00:21:15.544 read: IOPS=4835, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:21:15.544 slat (nsec): min=5057, max=77076, avg=14038.94, stdev=4776.02 00:21:15.544 clat (usec): min=575, max=4577, avg=788.94, stdev=67.79 00:21:15.544 lat (usec): min=592, max=4615, avg=802.97, stdev=68.86 00:21:15.544 clat percentiles (usec): 00:21:15.544 | 1.00th=[ 644], 5.00th=[ 685], 10.00th=[ 709], 20.00th=[ 742], 00:21:15.544 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:21:15.544 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 881], 00:21:15.544 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 971], 99.95th=[ 996], 00:21:15.544 | 99.99th=[ 1057] 00:21:15.544 bw ( KiB/s): min=18464, max=20544, per=50.02%, avg=19354.95, stdev=625.91, samples=19 00:21:15.544 iops : min= 4616, max= 5136, avg=4838.74, stdev=156.48, samples=19 00:21:15.544 lat (usec) : 750=24.92%, 1000=75.04% 00:21:15.544 lat (msec) : 2=0.03%, 10=0.01% 00:21:15.544 cpu : usr=89.30%, sys=9.21%, ctx=21, majf=0, minf=0 00:21:15.544 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.544 issued rwts: total=48364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.544 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:15.544 00:21:15.544 Run status group 0 (all jobs): 00:21:15.544 READ: bw=37.8MiB/s (39.6MB/s), 18.9MiB/s-18.9MiB/s (19.8MB/s-19.8MB/s), io=378MiB (396MB), run=10001-10001msec 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 ************************************ 00:21:15.544 END TEST fio_dif_1_multi_subsystems 00:21:15.544 ************************************ 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 00:21:15.544 real 0m11.243s 00:21:15.544 user 0m18.731s 00:21:15.544 sys 0m2.126s 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 08:32:17 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:15.544 08:32:17 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:15.544 08:32:17 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 ************************************ 00:21:15.544 START TEST fio_dif_rand_params 00:21:15.544 ************************************ 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:15.544 08:32:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 bdev_null0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:15.544 [2024-10-15 08:32:17.202049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:15.544 { 00:21:15.544 "params": { 00:21:15.544 "name": "Nvme$subsystem", 00:21:15.544 "trtype": "$TEST_TRANSPORT", 00:21:15.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.544 "adrfam": "ipv4", 00:21:15.544 "trsvcid": "$NVMF_PORT", 00:21:15.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.544 "hdgst": ${hdgst:-false}, 00:21:15.544 "ddgst": ${ddgst:-false} 00:21:15.544 }, 00:21:15.544 "method": "bdev_nvme_attach_controller" 00:21:15.544 } 00:21:15.544 EOF 00:21:15.544 )") 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:15.544 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:15.545 "params": { 00:21:15.545 "name": "Nvme0", 00:21:15.545 "trtype": "tcp", 00:21:15.545 "traddr": "10.0.0.3", 00:21:15.545 "adrfam": "ipv4", 00:21:15.545 "trsvcid": "4420", 00:21:15.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:15.545 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:15.545 "hdgst": false, 00:21:15.545 "ddgst": false 00:21:15.545 }, 00:21:15.545 "method": "bdev_nvme_attach_controller" 00:21:15.545 }' 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:15.545 08:32:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.803 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:15.803 ... 00:21:15.803 fio-3.35 00:21:15.803 Starting 3 threads 00:21:22.368 00:21:22.368 filename0: (groupid=0, jobs=1): err= 0: pid=83664: Tue Oct 15 08:32:23 2024 00:21:22.368 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(168MiB/5003msec) 00:21:22.368 slat (nsec): min=6900, max=54707, avg=15437.10, stdev=4719.45 00:21:22.368 clat (usec): min=7742, max=13369, avg=11108.46, stdev=473.59 00:21:22.368 lat (usec): min=7756, max=13384, avg=11123.89, stdev=474.13 00:21:22.368 clat percentiles (usec): 00:21:22.368 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:21:22.368 | 30.00th=[10814], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:21:22.368 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:21:22.368 | 99.00th=[12125], 99.50th=[12256], 99.90th=[13304], 99.95th=[13435], 00:21:22.368 | 99.99th=[13435] 00:21:22.368 bw ( KiB/s): min=32256, max=35328, per=33.26%, avg=34389.33, stdev=923.02, samples=9 00:21:22.369 iops : min= 252, max= 276, avg=268.67, stdev= 7.21, samples=9 00:21:22.369 lat (msec) : 10=0.22%, 20=99.78% 00:21:22.369 cpu : usr=91.26%, sys=8.20%, ctx=7, majf=0, minf=0 00:21:22.369 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.369 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.369 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:22.369 filename0: (groupid=0, jobs=1): err= 0: pid=83665: Tue Oct 15 08:32:23 2024 00:21:22.369 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(168MiB/5001msec) 00:21:22.369 slat (usec): min=6, max=1241, avg=15.17, stdev=33.83 00:21:22.369 clat (usec): min=5863, max=13371, avg=11103.12, stdev=507.98 00:21:22.369 lat (usec): min=5870, max=13384, avg=11118.29, stdev=508.27 00:21:22.369 clat percentiles (usec): 00:21:22.369 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:21:22.369 | 30.00th=[10814], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:21:22.369 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:21:22.369 | 99.00th=[12125], 99.50th=[12125], 99.90th=[13304], 99.95th=[13435], 00:21:22.369 | 99.99th=[13435] 00:21:22.369 bw ( KiB/s): min=33024, max=35328, per=33.34%, avg=34466.78, stdev=887.82, samples=9 00:21:22.369 iops : min= 258, max= 276, avg=269.22, stdev= 6.89, samples=9 00:21:22.369 lat (msec) : 10=0.22%, 20=99.78% 00:21:22.369 cpu : usr=89.66%, sys=9.54%, ctx=53, majf=0, minf=0 00:21:22.369 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.369 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.369 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:22.369 filename0: (groupid=0, jobs=1): err= 0: pid=83666: Tue Oct 15 08:32:23 2024 00:21:22.369 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(168MiB/5003msec) 00:21:22.369 slat (nsec): min=6809, max=54714, avg=15701.73, stdev=4773.40 00:21:22.369 clat (usec): min=7702, max=13349, avg=11106.82, 
stdev=472.27 00:21:22.369 lat (usec): min=7715, max=13365, avg=11122.52, stdev=472.96 00:21:22.369 clat percentiles (usec): 00:21:22.369 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:21:22.369 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:21:22.369 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11731], 95.00th=[11863], 00:21:22.369 | 99.00th=[12125], 99.50th=[12125], 99.90th=[13304], 99.95th=[13304], 00:21:22.369 | 99.99th=[13304] 00:21:22.369 bw ( KiB/s): min=32256, max=35328, per=33.26%, avg=34389.33, stdev=923.02, samples=9 00:21:22.369 iops : min= 252, max= 276, avg=268.67, stdev= 7.21, samples=9 00:21:22.369 lat (msec) : 10=0.22%, 20=99.78% 00:21:22.369 cpu : usr=90.56%, sys=8.88%, ctx=4, majf=0, minf=0 00:21:22.369 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:22.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.369 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.369 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:22.369 00:21:22.369 Run status group 0 (all jobs): 00:21:22.369 READ: bw=101MiB/s (106MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=505MiB (530MB), run=5001-5003msec 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
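Each create_subsystem call traced here boils down to four RPCs against the nvmf_tgt started earlier: create a null bdev with protection information, create an NVMe-oF subsystem, attach the bdev as a namespace, and open a TCP listener. Since rpc_cmd is effectively a thin wrapper around scripts/rpc.py, a minimal standalone sketch of the same sequence for subsystem 0 (assuming the target is already running and serving RPCs on the default /var/tmp/spdk.sock, as in this run) looks like:

# 64 MiB null bdev with 512-byte blocks, 16 bytes of per-block metadata, DIF type 2 -- same arguments as the trace
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
# NVMe-oF subsystem that any host may connect to
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
# expose the null bdev as a namespace of that subsystem
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
# listen on the in-namespace target address configured earlier by nvmf_veth_init
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The continuation of the trace repeats the same pattern for cnode1 and cnode2 with bdev_null1 and bdev_null2.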
00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 bdev_null0 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 [2024-10-15 08:32:23.357542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 bdev_null1 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 bdev_null2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:22.369 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 
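
Worth noting in the trace above: the fio step never writes a config to disk. gen_nvmf_target_json emits the bdev_nvme_attach_controller JSON on /dev/fd/62 and gen_fio_conf emits the fio job file on /dev/fd/61, both handed to fio as process-substitution file descriptors. Stripped of the fio_bdev/fio_plugin plumbing, the pattern being assembled is roughly (a simplified sketch, not the literal dif.sh code):

  LD_PRELOAD="$asan_lib /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf <(gen_nvmf_target_json 0 1 2) <(gen_fio_conf)
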
00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.370 { 00:21:22.370 "params": { 00:21:22.370 "name": "Nvme$subsystem", 00:21:22.370 "trtype": "$TEST_TRANSPORT", 00:21:22.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.370 "adrfam": "ipv4", 00:21:22.370 "trsvcid": "$NVMF_PORT", 00:21:22.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.370 "hdgst": ${hdgst:-false}, 00:21:22.370 "ddgst": ${ddgst:-false} 00:21:22.370 }, 00:21:22.370 "method": "bdev_nvme_attach_controller" 00:21:22.370 } 00:21:22.370 EOF 00:21:22.370 )") 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.370 { 00:21:22.370 "params": { 00:21:22.370 "name": "Nvme$subsystem", 00:21:22.370 "trtype": "$TEST_TRANSPORT", 00:21:22.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.370 "adrfam": "ipv4", 00:21:22.370 "trsvcid": "$NVMF_PORT", 00:21:22.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.370 "hdgst": ${hdgst:-false}, 00:21:22.370 "ddgst": ${ddgst:-false} 00:21:22.370 }, 00:21:22.370 "method": "bdev_nvme_attach_controller" 00:21:22.370 } 00:21:22.370 EOF 00:21:22.370 )") 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:22.370 08:32:23 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.370 { 00:21:22.370 "params": { 00:21:22.370 "name": "Nvme$subsystem", 00:21:22.370 "trtype": "$TEST_TRANSPORT", 00:21:22.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.370 "adrfam": "ipv4", 00:21:22.370 "trsvcid": "$NVMF_PORT", 00:21:22.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.370 "hdgst": ${hdgst:-false}, 00:21:22.370 "ddgst": ${ddgst:-false} 00:21:22.370 }, 00:21:22.370 "method": "bdev_nvme_attach_controller" 00:21:22.370 } 00:21:22.370 EOF 00:21:22.370 )") 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:22.370 "params": { 00:21:22.370 "name": "Nvme0", 00:21:22.370 "trtype": "tcp", 00:21:22.370 "traddr": "10.0.0.3", 00:21:22.370 "adrfam": "ipv4", 00:21:22.370 "trsvcid": "4420", 00:21:22.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:22.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:22.370 "hdgst": false, 00:21:22.370 "ddgst": false 00:21:22.370 }, 00:21:22.370 "method": "bdev_nvme_attach_controller" 00:21:22.370 },{ 00:21:22.370 "params": { 00:21:22.370 "name": "Nvme1", 00:21:22.370 "trtype": "tcp", 00:21:22.370 "traddr": "10.0.0.3", 00:21:22.370 "adrfam": "ipv4", 00:21:22.370 "trsvcid": "4420", 00:21:22.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.370 "hdgst": false, 00:21:22.370 "ddgst": false 00:21:22.370 }, 00:21:22.370 "method": "bdev_nvme_attach_controller" 00:21:22.370 },{ 00:21:22.370 "params": { 00:21:22.370 "name": "Nvme2", 00:21:22.370 "trtype": "tcp", 00:21:22.370 "traddr": "10.0.0.3", 00:21:22.370 "adrfam": "ipv4", 00:21:22.370 "trsvcid": "4420", 00:21:22.370 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:22.370 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:22.370 "hdgst": false, 00:21:22.370 "ddgst": false 00:21:22.370 }, 00:21:22.370 "method": "bdev_nvme_attach_controller" 00:21:22.370 }' 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:22.370 08:32:23 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:22.370 08:32:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:22.370 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:22.370 ... 00:21:22.370 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:22.370 ... 00:21:22.370 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:22.370 ... 00:21:22.370 fio-3.35 00:21:22.370 Starting 24 threads 00:21:34.584 00:21:34.584 filename0: (groupid=0, jobs=1): err= 0: pid=83763: Tue Oct 15 08:32:34 2024 00:21:34.585 read: IOPS=250, BW=1000KiB/s (1024kB/s)(9.83MiB/10063msec) 00:21:34.585 slat (usec): min=7, max=10030, avg=27.57, stdev=341.23 00:21:34.585 clat (msec): min=11, max=142, avg=63.81, stdev=19.97 00:21:34.585 lat (msec): min=11, max=142, avg=63.84, stdev=19.97 00:21:34.585 clat percentiles (msec): 00:21:34.585 | 1.00th=[ 15], 5.00th=[ 24], 10.00th=[ 35], 20.00th=[ 48], 00:21:34.585 | 30.00th=[ 53], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 72], 00:21:34.585 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 93], 00:21:34.585 | 99.00th=[ 107], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 128], 00:21:34.585 | 99.99th=[ 144] 00:21:34.585 bw ( KiB/s): min= 755, max= 1568, per=4.06%, avg=1000.35, stdev=214.52, samples=20 00:21:34.585 iops : min= 188, max= 392, avg=250.05, stdev=53.68, samples=20 00:21:34.585 lat (msec) : 20=1.83%, 50=25.16%, 100=70.95%, 250=2.07% 00:21:34.585 cpu : usr=33.07%, sys=1.85%, ctx=968, majf=0, minf=9 00:21:34.585 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=79.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:34.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.585 filename0: (groupid=0, jobs=1): err= 0: pid=83764: Tue Oct 15 08:32:34 2024 00:21:34.585 read: IOPS=254, BW=1018KiB/s (1043kB/s)(9.99MiB/10044msec) 00:21:34.585 slat (usec): min=5, max=8035, avg=28.64, stdev=280.95 00:21:34.585 clat (msec): min=13, max=127, avg=62.62, stdev=19.45 00:21:34.585 lat (msec): min=13, max=127, avg=62.64, stdev=19.45 00:21:34.585 clat percentiles (msec): 00:21:34.585 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 47], 00:21:34.585 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 68], 60.00th=[ 72], 00:21:34.585 | 70.00th=[ 74], 80.00th=[ 80], 90.00th=[ 84], 95.00th=[ 91], 00:21:34.585 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 118], 99.95th=[ 121], 00:21:34.585 | 99.99th=[ 128] 00:21:34.585 bw ( KiB/s): min= 792, max= 1864, per=4.13%, avg=1018.70, stdev=257.19, samples=20 00:21:34.585 iops : min= 198, max= 466, avg=254.65, stdev=64.32, samples=20 00:21:34.585 lat (msec) : 20=1.33%, 50=26.75%, 100=70.20%, 250=1.72% 00:21:34.585 cpu : usr=40.45%, sys=2.49%, ctx=1357, majf=0, minf=9 00:21:34.585 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.3%, 16=16.7%, 
32=0.0%, >=64=0.0% 00:21:34.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 issued rwts: total=2557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.585 filename0: (groupid=0, jobs=1): err= 0: pid=83765: Tue Oct 15 08:32:34 2024 00:21:34.585 read: IOPS=249, BW=997KiB/s (1021kB/s)(9.79MiB/10051msec) 00:21:34.585 slat (usec): min=6, max=8039, avg=29.04, stdev=289.68 00:21:34.585 clat (msec): min=23, max=120, avg=63.93, stdev=18.49 00:21:34.585 lat (msec): min=23, max=120, avg=63.96, stdev=18.48 00:21:34.585 clat percentiles (msec): 00:21:34.585 | 1.00th=[ 28], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 48], 00:21:34.585 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 69], 60.00th=[ 72], 00:21:34.585 | 70.00th=[ 75], 80.00th=[ 80], 90.00th=[ 85], 95.00th=[ 94], 00:21:34.585 | 99.00th=[ 105], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.585 | 99.99th=[ 121] 00:21:34.585 bw ( KiB/s): min= 696, max= 1536, per=4.05%, avg=997.35, stdev=170.84, samples=20 00:21:34.585 iops : min= 174, max= 384, avg=249.30, stdev=42.72, samples=20 00:21:34.585 lat (msec) : 50=28.66%, 100=69.46%, 250=1.88% 00:21:34.585 cpu : usr=42.68%, sys=2.34%, ctx=1371, majf=0, minf=9 00:21:34.585 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=77.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:34.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 complete : 0=0.0%, 4=88.8%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 issued rwts: total=2505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.585 filename0: (groupid=0, jobs=1): err= 0: pid=83766: Tue Oct 15 08:32:34 2024 00:21:34.585 read: IOPS=260, BW=1044KiB/s (1069kB/s)(10.2MiB/10014msec) 00:21:34.585 slat (usec): min=4, max=8034, avg=32.14, stdev=361.60 00:21:34.585 clat (msec): min=15, max=119, avg=61.18, stdev=18.06 00:21:34.585 lat (msec): min=15, max=119, avg=61.21, stdev=18.06 00:21:34.585 clat percentiles (msec): 00:21:34.585 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 00:21:34.585 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 72], 00:21:34.585 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 85], 00:21:34.585 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.585 | 99.99th=[ 121] 00:21:34.585 bw ( KiB/s): min= 872, max= 1848, per=4.22%, avg=1040.84, stdev=221.80, samples=19 00:21:34.585 iops : min= 218, max= 462, avg=260.21, stdev=55.45, samples=19 00:21:34.585 lat (msec) : 20=0.11%, 50=34.40%, 100=64.37%, 250=1.11% 00:21:34.585 cpu : usr=31.50%, sys=1.71%, ctx=987, majf=0, minf=9 00:21:34.585 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:34.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 issued rwts: total=2613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.585 filename0: (groupid=0, jobs=1): err= 0: pid=83767: Tue Oct 15 08:32:34 2024 00:21:34.585 read: IOPS=262, BW=1051KiB/s (1076kB/s)(10.3MiB/10021msec) 00:21:34.585 slat (usec): min=4, max=8025, avg=22.25, stdev=217.42 00:21:34.585 clat (msec): min=21, max=120, avg=60.80, stdev=17.49 00:21:34.585 lat (msec): min=21, max=120, avg=60.82, 
stdev=17.50 00:21:34.585 clat percentiles (msec): 00:21:34.585 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 48], 00:21:34.585 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 71], 00:21:34.585 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 83], 95.00th=[ 85], 00:21:34.585 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 122], 99.95th=[ 122], 00:21:34.585 | 99.99th=[ 122] 00:21:34.585 bw ( KiB/s): min= 848, max= 1800, per=4.25%, avg=1048.35, stdev=209.97, samples=20 00:21:34.585 iops : min= 212, max= 450, avg=262.05, stdev=52.49, samples=20 00:21:34.585 lat (msec) : 50=34.92%, 100=64.06%, 250=1.03% 00:21:34.585 cpu : usr=36.18%, sys=1.99%, ctx=1038, majf=0, minf=9 00:21:34.585 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:34.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 issued rwts: total=2632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.585 filename0: (groupid=0, jobs=1): err= 0: pid=83768: Tue Oct 15 08:32:34 2024 00:21:34.585 read: IOPS=259, BW=1038KiB/s (1063kB/s)(10.2MiB/10027msec) 00:21:34.585 slat (usec): min=4, max=8023, avg=28.66, stdev=306.95 00:21:34.585 clat (msec): min=19, max=123, avg=61.45, stdev=17.86 00:21:34.585 lat (msec): min=20, max=123, avg=61.48, stdev=17.85 00:21:34.585 clat percentiles (msec): 00:21:34.585 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 47], 00:21:34.585 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 64], 60.00th=[ 71], 00:21:34.585 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 83], 95.00th=[ 88], 00:21:34.585 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 124], 99.95th=[ 124], 00:21:34.585 | 99.99th=[ 124] 00:21:34.585 bw ( KiB/s): min= 896, max= 1768, per=4.21%, avg=1037.20, stdev=197.56, samples=20 00:21:34.585 iops : min= 224, max= 442, avg=259.30, stdev=49.39, samples=20 00:21:34.585 lat (msec) : 20=0.04%, 50=32.89%, 100=65.92%, 250=1.15% 00:21:34.585 cpu : usr=37.79%, sys=2.08%, ctx=1162, majf=0, minf=9 00:21:34.585 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:34.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 issued rwts: total=2603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.585 filename0: (groupid=0, jobs=1): err= 0: pid=83769: Tue Oct 15 08:32:34 2024 00:21:34.585 read: IOPS=257, BW=1030KiB/s (1055kB/s)(10.1MiB/10074msec) 00:21:34.585 slat (usec): min=5, max=4044, avg=16.27, stdev=111.75 00:21:34.585 clat (usec): min=1572, max=116727, avg=61879.63, stdev=22978.14 00:21:34.585 lat (usec): min=1579, max=116755, avg=61895.90, stdev=22978.79 00:21:34.585 clat percentiles (usec): 00:21:34.585 | 1.00th=[ 1713], 5.00th=[ 7963], 10.00th=[ 34866], 20.00th=[ 45876], 00:21:34.585 | 30.00th=[ 49546], 40.00th=[ 57410], 50.00th=[ 67634], 60.00th=[ 71828], 00:21:34.585 | 70.00th=[ 74974], 80.00th=[ 79168], 90.00th=[ 87557], 95.00th=[ 94897], 00:21:34.585 | 99.00th=[109577], 99.50th=[115868], 99.90th=[116917], 99.95th=[116917], 00:21:34.585 | 99.99th=[116917] 00:21:34.585 bw ( KiB/s): min= 768, max= 2416, per=4.19%, avg=1033.30, stdev=346.60, samples=20 00:21:34.585 iops : min= 192, max= 604, avg=258.30, stdev=86.66, samples=20 00:21:34.585 lat (msec) : 2=2.47%, 4=0.62%, 10=2.47%, 
20=0.62%, 50=24.52% 00:21:34.585 lat (msec) : 100=66.38%, 250=2.93% 00:21:34.585 cpu : usr=40.83%, sys=2.73%, ctx=1270, majf=0, minf=0 00:21:34.585 IO depths : 1=0.3%, 2=1.9%, 4=7.1%, 8=75.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:34.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 complete : 0=0.0%, 4=89.3%, 8=9.1%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 issued rwts: total=2594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.585 filename0: (groupid=0, jobs=1): err= 0: pid=83770: Tue Oct 15 08:32:34 2024 00:21:34.585 read: IOPS=258, BW=1036KiB/s (1061kB/s)(10.2MiB/10036msec) 00:21:34.585 slat (usec): min=7, max=9028, avg=26.28, stdev=262.34 00:21:34.585 clat (msec): min=18, max=120, avg=61.59, stdev=17.29 00:21:34.585 lat (msec): min=18, max=120, avg=61.62, stdev=17.30 00:21:34.585 clat percentiles (msec): 00:21:34.585 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 47], 00:21:34.585 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 63], 60.00th=[ 70], 00:21:34.585 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 82], 95.00th=[ 88], 00:21:34.585 | 99.00th=[ 105], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.585 | 99.99th=[ 121] 00:21:34.585 bw ( KiB/s): min= 840, max= 1528, per=4.19%, avg=1033.25, stdev=157.42, samples=20 00:21:34.585 iops : min= 210, max= 382, avg=258.30, stdev=39.36, samples=20 00:21:34.585 lat (msec) : 20=0.23%, 50=31.94%, 100=66.10%, 250=1.73% 00:21:34.585 cpu : usr=38.58%, sys=2.58%, ctx=1354, majf=0, minf=9 00:21:34.585 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:34.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.585 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.586 filename1: (groupid=0, jobs=1): err= 0: pid=83771: Tue Oct 15 08:32:34 2024 00:21:34.586 read: IOPS=265, BW=1061KiB/s (1087kB/s)(10.4MiB/10012msec) 00:21:34.586 slat (usec): min=4, max=8030, avg=29.67, stdev=347.39 00:21:34.586 clat (msec): min=13, max=120, avg=60.19, stdev=18.07 00:21:34.586 lat (msec): min=13, max=120, avg=60.22, stdev=18.08 00:21:34.586 clat percentiles (msec): 00:21:34.586 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 47], 00:21:34.586 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 71], 00:21:34.586 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 85], 00:21:34.586 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.586 | 99.99th=[ 121] 00:21:34.586 bw ( KiB/s): min= 864, max= 1888, per=4.28%, avg=1054.74, stdev=228.31, samples=19 00:21:34.586 iops : min= 216, max= 472, avg=263.68, stdev=57.08, samples=19 00:21:34.586 lat (msec) : 20=0.26%, 50=37.20%, 100=61.48%, 250=1.05% 00:21:34.586 cpu : usr=31.27%, sys=1.94%, ctx=900, majf=0, minf=9 00:21:34.586 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:34.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 issued rwts: total=2656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.586 filename1: (groupid=0, jobs=1): err= 0: pid=83772: Tue Oct 15 08:32:34 2024 00:21:34.586 read: IOPS=249, BW=997KiB/s 
(1021kB/s)(9.78MiB/10050msec) 00:21:34.586 slat (usec): min=7, max=9044, avg=20.62, stdev=241.26 00:21:34.586 clat (msec): min=7, max=145, avg=64.06, stdev=19.13 00:21:34.586 lat (msec): min=7, max=145, avg=64.08, stdev=19.13 00:21:34.586 clat percentiles (msec): 00:21:34.586 | 1.00th=[ 14], 5.00th=[ 30], 10.00th=[ 35], 20.00th=[ 48], 00:21:34.586 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:21:34.586 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 84], 95.00th=[ 93], 00:21:34.586 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.586 | 99.99th=[ 146] 00:21:34.586 bw ( KiB/s): min= 784, max= 1568, per=4.03%, avg=994.30, stdev=207.53, samples=20 00:21:34.586 iops : min= 196, max= 392, avg=248.55, stdev=51.90, samples=20 00:21:34.586 lat (msec) : 10=0.08%, 20=1.40%, 50=25.56%, 100=71.13%, 250=1.84% 00:21:34.586 cpu : usr=34.18%, sys=2.04%, ctx=1065, majf=0, minf=9 00:21:34.586 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=79.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:34.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 complete : 0=0.0%, 4=88.5%, 8=10.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 issued rwts: total=2504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.586 filename1: (groupid=0, jobs=1): err= 0: pid=83773: Tue Oct 15 08:32:34 2024 00:21:34.586 read: IOPS=263, BW=1053KiB/s (1078kB/s)(10.3MiB/10001msec) 00:21:34.586 slat (usec): min=3, max=9029, avg=30.42, stdev=358.41 00:21:34.586 clat (msec): min=2, max=119, avg=60.65, stdev=17.84 00:21:34.586 lat (msec): min=2, max=119, avg=60.68, stdev=17.84 00:21:34.586 clat percentiles (msec): 00:21:34.586 | 1.00th=[ 5], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:21:34.586 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 71], 00:21:34.586 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 85], 00:21:34.586 | 99.00th=[ 99], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.586 | 99.99th=[ 121] 00:21:34.586 bw ( KiB/s): min= 864, max= 1584, per=4.19%, avg=1032.00, stdev=159.31, samples=19 00:21:34.586 iops : min= 216, max= 396, avg=258.00, stdev=39.83, samples=19 00:21:34.586 lat (msec) : 4=0.68%, 10=0.76%, 20=0.23%, 50=35.55%, 100=61.91% 00:21:34.586 lat (msec) : 250=0.87% 00:21:34.586 cpu : usr=31.51%, sys=1.75%, ctx=994, majf=0, minf=9 00:21:34.586 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:34.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 complete : 0=0.0%, 4=87.6%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 issued rwts: total=2633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.586 filename1: (groupid=0, jobs=1): err= 0: pid=83774: Tue Oct 15 08:32:34 2024 00:21:34.586 read: IOPS=261, BW=1046KiB/s (1072kB/s)(10.3MiB/10045msec) 00:21:34.586 slat (usec): min=6, max=8041, avg=28.63, stdev=317.54 00:21:34.586 clat (msec): min=21, max=119, avg=60.98, stdev=18.47 00:21:34.586 lat (msec): min=21, max=119, avg=61.00, stdev=18.48 00:21:34.586 clat percentiles (msec): 00:21:34.586 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 46], 00:21:34.586 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 63], 60.00th=[ 71], 00:21:34.586 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 82], 95.00th=[ 86], 00:21:34.586 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.586 | 99.99th=[ 121] 00:21:34.586 bw ( KiB/s): 
min= 816, max= 1776, per=4.24%, avg=1044.80, stdev=226.50, samples=20 00:21:34.586 iops : min= 204, max= 444, avg=261.20, stdev=56.63, samples=20 00:21:34.586 lat (msec) : 50=32.31%, 100=66.40%, 250=1.29% 00:21:34.586 cpu : usr=39.69%, sys=2.20%, ctx=1201, majf=0, minf=9 00:21:34.586 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:34.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 issued rwts: total=2628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.586 filename1: (groupid=0, jobs=1): err= 0: pid=83775: Tue Oct 15 08:32:34 2024 00:21:34.586 read: IOPS=266, BW=1064KiB/s (1090kB/s)(10.4MiB/10016msec) 00:21:34.586 slat (usec): min=4, max=8026, avg=28.32, stdev=245.54 00:21:34.586 clat (msec): min=16, max=119, avg=60.01, stdev=18.08 00:21:34.586 lat (msec): min=16, max=119, avg=60.04, stdev=18.08 00:21:34.586 clat percentiles (msec): 00:21:34.586 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 47], 00:21:34.586 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 69], 00:21:34.586 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 82], 95.00th=[ 86], 00:21:34.586 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.586 | 99.99th=[ 121] 00:21:34.586 bw ( KiB/s): min= 848, max= 1880, per=4.30%, avg=1059.70, stdev=226.83, samples=20 00:21:34.586 iops : min= 212, max= 470, avg=264.90, stdev=56.70, samples=20 00:21:34.586 lat (msec) : 20=0.75%, 50=33.96%, 100=63.94%, 250=1.35% 00:21:34.586 cpu : usr=41.13%, sys=2.47%, ctx=1233, majf=0, minf=9 00:21:34.586 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:34.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 complete : 0=0.0%, 4=86.9%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 issued rwts: total=2665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.586 filename1: (groupid=0, jobs=1): err= 0: pid=83776: Tue Oct 15 08:32:34 2024 00:21:34.586 read: IOPS=259, BW=1038KiB/s (1063kB/s)(10.2MiB/10042msec) 00:21:34.586 slat (usec): min=4, max=8029, avg=27.65, stdev=293.52 00:21:34.586 clat (msec): min=14, max=121, avg=61.47, stdev=17.81 00:21:34.586 lat (msec): min=14, max=122, avg=61.50, stdev=17.81 00:21:34.586 clat percentiles (msec): 00:21:34.586 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 48], 00:21:34.586 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 64], 60.00th=[ 72], 00:21:34.586 | 70.00th=[ 73], 80.00th=[ 77], 90.00th=[ 83], 95.00th=[ 87], 00:21:34.586 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 114], 99.95th=[ 114], 00:21:34.586 | 99.99th=[ 123] 00:21:34.586 bw ( KiB/s): min= 816, max= 1800, per=4.20%, avg=1035.15, stdev=216.55, samples=20 00:21:34.586 iops : min= 204, max= 450, avg=258.75, stdev=54.14, samples=20 00:21:34.586 lat (msec) : 20=0.04%, 50=33.47%, 100=65.22%, 250=1.27% 00:21:34.586 cpu : usr=37.60%, sys=2.26%, ctx=1141, majf=0, minf=9 00:21:34.586 IO depths : 1=0.1%, 2=0.4%, 4=1.8%, 8=82.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:34.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 issued rwts: total=2605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:21:34.586 filename1: (groupid=0, jobs=1): err= 0: pid=83777: Tue Oct 15 08:32:34 2024 00:21:34.586 read: IOPS=255, BW=1023KiB/s (1047kB/s)(10.0MiB/10048msec) 00:21:34.586 slat (usec): min=3, max=8030, avg=20.42, stdev=223.53 00:21:34.586 clat (msec): min=11, max=123, avg=62.45, stdev=19.90 00:21:34.586 lat (msec): min=11, max=123, avg=62.47, stdev=19.90 00:21:34.586 clat percentiles (msec): 00:21:34.586 | 1.00th=[ 15], 5.00th=[ 24], 10.00th=[ 35], 20.00th=[ 48], 00:21:34.586 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:21:34.586 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 85], 95.00th=[ 88], 00:21:34.586 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 125], 99.95th=[ 125], 00:21:34.586 | 99.99th=[ 125] 00:21:34.586 bw ( KiB/s): min= 792, max= 1736, per=4.14%, avg=1021.50, stdev=229.77, samples=20 00:21:34.586 iops : min= 198, max= 434, avg=255.35, stdev=57.46, samples=20 00:21:34.586 lat (msec) : 20=1.99%, 50=29.82%, 100=66.49%, 250=1.71% 00:21:34.586 cpu : usr=31.88%, sys=1.58%, ctx=1024, majf=0, minf=9 00:21:34.586 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.2%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:34.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 issued rwts: total=2569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.586 filename1: (groupid=0, jobs=1): err= 0: pid=83778: Tue Oct 15 08:32:34 2024 00:21:34.586 read: IOPS=251, BW=1005KiB/s (1030kB/s)(9.86MiB/10038msec) 00:21:34.586 slat (usec): min=4, max=8057, avg=20.42, stdev=226.01 00:21:34.586 clat (msec): min=21, max=120, avg=63.51, stdev=18.22 00:21:34.586 lat (msec): min=21, max=120, avg=63.53, stdev=18.22 00:21:34.586 clat percentiles (msec): 00:21:34.586 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 48], 00:21:34.586 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:21:34.586 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 86], 00:21:34.586 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.586 | 99.99th=[ 121] 00:21:34.586 bw ( KiB/s): min= 840, max= 1704, per=4.07%, avg=1002.70, stdev=204.25, samples=20 00:21:34.586 iops : min= 210, max= 426, avg=250.65, stdev=51.07, samples=20 00:21:34.586 lat (msec) : 50=29.92%, 100=68.81%, 250=1.27% 00:21:34.586 cpu : usr=31.39%, sys=1.81%, ctx=928, majf=0, minf=9 00:21:34.586 IO depths : 1=0.1%, 2=0.6%, 4=2.6%, 8=80.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:34.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.586 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.586 filename2: (groupid=0, jobs=1): err= 0: pid=83779: Tue Oct 15 08:32:34 2024 00:21:34.587 read: IOPS=252, BW=1011KiB/s (1035kB/s)(9.91MiB/10041msec) 00:21:34.587 slat (usec): min=4, max=8034, avg=17.93, stdev=159.30 00:21:34.587 clat (msec): min=19, max=120, avg=63.18, stdev=18.54 00:21:34.587 lat (msec): min=19, max=120, avg=63.20, stdev=18.54 00:21:34.587 clat percentiles (msec): 00:21:34.587 | 1.00th=[ 22], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 47], 00:21:34.587 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 68], 60.00th=[ 72], 00:21:34.587 | 70.00th=[ 75], 80.00th=[ 80], 90.00th=[ 84], 95.00th=[ 88], 00:21:34.587 | 99.00th=[ 105], 99.50th=[ 110], 
99.90th=[ 122], 99.95th=[ 122], 00:21:34.587 | 99.99th=[ 122] 00:21:34.587 bw ( KiB/s): min= 816, max= 1768, per=4.09%, avg=1008.00, stdev=215.79, samples=20 00:21:34.587 iops : min= 204, max= 442, avg=252.00, stdev=53.95, samples=20 00:21:34.587 lat (msec) : 20=0.12%, 50=26.61%, 100=71.86%, 250=1.42% 00:21:34.587 cpu : usr=38.37%, sys=2.39%, ctx=1407, majf=0, minf=9 00:21:34.587 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 issued rwts: total=2537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.587 filename2: (groupid=0, jobs=1): err= 0: pid=83780: Tue Oct 15 08:32:34 2024 00:21:34.587 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10035msec) 00:21:34.587 slat (usec): min=3, max=4026, avg=16.79, stdev=78.59 00:21:34.587 clat (msec): min=13, max=117, avg=61.23, stdev=18.41 00:21:34.587 lat (msec): min=13, max=117, avg=61.24, stdev=18.41 00:21:34.587 clat percentiles (msec): 00:21:34.587 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 35], 20.00th=[ 47], 00:21:34.587 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 65], 60.00th=[ 71], 00:21:34.587 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 83], 95.00th=[ 88], 00:21:34.587 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 118], 99.95th=[ 118], 00:21:34.587 | 99.99th=[ 118] 00:21:34.587 bw ( KiB/s): min= 840, max= 1776, per=4.22%, avg=1040.55, stdev=225.13, samples=20 00:21:34.587 iops : min= 210, max= 444, avg=260.10, stdev=56.26, samples=20 00:21:34.587 lat (msec) : 20=0.73%, 50=31.03%, 100=66.95%, 250=1.30% 00:21:34.587 cpu : usr=41.51%, sys=2.38%, ctx=1340, majf=0, minf=9 00:21:34.587 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 issued rwts: total=2617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.587 filename2: (groupid=0, jobs=1): err= 0: pid=83781: Tue Oct 15 08:32:34 2024 00:21:34.587 read: IOPS=263, BW=1056KiB/s (1081kB/s)(10.3MiB/10004msec) 00:21:34.587 slat (usec): min=3, max=8028, avg=17.76, stdev=156.03 00:21:34.587 clat (msec): min=4, max=119, avg=60.53, stdev=17.93 00:21:34.587 lat (msec): min=4, max=119, avg=60.54, stdev=17.92 00:21:34.587 clat percentiles (msec): 00:21:34.587 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 00:21:34.587 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 70], 00:21:34.587 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 82], 95.00th=[ 87], 00:21:34.587 | 99.00th=[ 104], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.587 | 99.99th=[ 121] 00:21:34.587 bw ( KiB/s): min= 880, max= 1768, per=4.25%, avg=1047.58, stdev=206.62, samples=19 00:21:34.587 iops : min= 220, max= 442, avg=261.89, stdev=51.65, samples=19 00:21:34.587 lat (msec) : 10=0.34%, 20=0.08%, 50=34.15%, 100=64.22%, 250=1.21% 00:21:34.587 cpu : usr=37.97%, sys=2.24%, ctx=1128, majf=0, minf=9 00:21:34.587 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=82.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 issued rwts: 
total=2641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.587 filename2: (groupid=0, jobs=1): err= 0: pid=83782: Tue Oct 15 08:32:34 2024 00:21:34.587 read: IOPS=258, BW=1033KiB/s (1058kB/s)(10.1MiB/10049msec) 00:21:34.587 slat (usec): min=5, max=8022, avg=24.29, stdev=242.04 00:21:34.587 clat (msec): min=11, max=122, avg=61.76, stdev=19.99 00:21:34.587 lat (msec): min=11, max=122, avg=61.79, stdev=20.00 00:21:34.587 clat percentiles (msec): 00:21:34.587 | 1.00th=[ 15], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 47], 00:21:34.587 | 30.00th=[ 50], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 72], 00:21:34.587 | 70.00th=[ 74], 80.00th=[ 79], 90.00th=[ 84], 95.00th=[ 88], 00:21:34.587 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 123], 99.95th=[ 123], 00:21:34.587 | 99.99th=[ 123] 00:21:34.587 bw ( KiB/s): min= 755, max= 1720, per=4.19%, avg=1033.45, stdev=269.09, samples=20 00:21:34.587 iops : min= 188, max= 430, avg=258.30, stdev=67.25, samples=20 00:21:34.587 lat (msec) : 20=2.31%, 50=28.47%, 100=67.64%, 250=1.58% 00:21:34.587 cpu : usr=38.25%, sys=2.77%, ctx=1142, majf=0, minf=9 00:21:34.587 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:21:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 issued rwts: total=2596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.587 filename2: (groupid=0, jobs=1): err= 0: pid=83783: Tue Oct 15 08:32:34 2024 00:21:34.587 read: IOPS=257, BW=1029KiB/s (1053kB/s)(10.1MiB/10024msec) 00:21:34.587 slat (usec): min=4, max=9029, avg=23.55, stdev=224.11 00:21:34.587 clat (msec): min=17, max=126, avg=62.04, stdev=16.95 00:21:34.587 lat (msec): min=17, max=126, avg=62.07, stdev=16.95 00:21:34.587 clat percentiles (msec): 00:21:34.587 | 1.00th=[ 25], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 48], 00:21:34.587 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 63], 60.00th=[ 71], 00:21:34.587 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 83], 95.00th=[ 87], 00:21:34.587 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 127], 99.95th=[ 127], 00:21:34.587 | 99.99th=[ 127] 00:21:34.587 bw ( KiB/s): min= 848, max= 1584, per=4.16%, avg=1026.35, stdev=166.67, samples=20 00:21:34.587 iops : min= 212, max= 396, avg=256.55, stdev=41.67, samples=20 00:21:34.587 lat (msec) : 20=0.23%, 50=30.14%, 100=68.27%, 250=1.36% 00:21:34.587 cpu : usr=40.64%, sys=2.52%, ctx=1218, majf=0, minf=9 00:21:34.587 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 complete : 0=0.0%, 4=87.8%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 issued rwts: total=2578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.587 filename2: (groupid=0, jobs=1): err= 0: pid=83784: Tue Oct 15 08:32:34 2024 00:21:34.587 read: IOPS=259, BW=1039KiB/s (1064kB/s)(10.2MiB/10008msec) 00:21:34.587 slat (usec): min=3, max=8037, avg=31.71, stdev=338.10 00:21:34.587 clat (msec): min=15, max=120, avg=61.47, stdev=16.39 00:21:34.587 lat (msec): min=15, max=120, avg=61.50, stdev=16.40 00:21:34.587 clat percentiles (msec): 00:21:34.587 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:21:34.587 | 30.00th=[ 50], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 69], 
00:21:34.587 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 82], 95.00th=[ 85], 00:21:34.587 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.587 | 99.99th=[ 121] 00:21:34.587 bw ( KiB/s): min= 896, max= 1552, per=4.19%, avg=1033.68, stdev=144.80, samples=19 00:21:34.587 iops : min= 224, max= 388, avg=258.42, stdev=36.20, samples=19 00:21:34.587 lat (msec) : 20=0.35%, 50=32.97%, 100=65.68%, 250=1.00% 00:21:34.587 cpu : usr=36.07%, sys=2.07%, ctx=1211, majf=0, minf=9 00:21:34.587 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=80.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:21:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.587 filename2: (groupid=0, jobs=1): err= 0: pid=83785: Tue Oct 15 08:32:34 2024 00:21:34.587 read: IOPS=249, BW=1000KiB/s (1024kB/s)(9.81MiB/10048msec) 00:21:34.587 slat (usec): min=7, max=8035, avg=47.39, stdev=510.93 00:21:34.587 clat (msec): min=11, max=155, avg=63.74, stdev=18.85 00:21:34.587 lat (msec): min=11, max=155, avg=63.79, stdev=18.86 00:21:34.587 clat percentiles (msec): 00:21:34.587 | 1.00th=[ 17], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 00:21:34.587 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:21:34.587 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 84], 95.00th=[ 93], 00:21:34.587 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 121], 00:21:34.587 | 99.99th=[ 157] 00:21:34.587 bw ( KiB/s): min= 808, max= 1504, per=4.06%, avg=999.55, stdev=186.06, samples=20 00:21:34.587 iops : min= 202, max= 376, avg=249.85, stdev=46.52, samples=20 00:21:34.587 lat (msec) : 20=1.19%, 50=26.68%, 100=70.29%, 250=1.83% 00:21:34.587 cpu : usr=31.37%, sys=1.87%, ctx=917, majf=0, minf=9 00:21:34.587 IO depths : 1=0.1%, 2=0.7%, 4=3.1%, 8=79.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 issued rwts: total=2511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.587 filename2: (groupid=0, jobs=1): err= 0: pid=83786: Tue Oct 15 08:32:34 2024 00:21:34.587 read: IOPS=254, BW=1020KiB/s (1044kB/s)(10.0MiB/10046msec) 00:21:34.587 slat (usec): min=4, max=8029, avg=21.95, stdev=237.50 00:21:34.587 clat (msec): min=14, max=119, avg=62.59, stdev=19.06 00:21:34.587 lat (msec): min=14, max=119, avg=62.61, stdev=19.06 00:21:34.587 clat percentiles (msec): 00:21:34.587 | 1.00th=[ 22], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 47], 00:21:34.587 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 70], 60.00th=[ 72], 00:21:34.587 | 70.00th=[ 73], 80.00th=[ 81], 90.00th=[ 84], 95.00th=[ 87], 00:21:34.587 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 120], 99.95th=[ 120], 00:21:34.587 | 99.99th=[ 120] 00:21:34.587 bw ( KiB/s): min= 760, max= 1672, per=4.13%, avg=1017.70, stdev=222.19, samples=20 00:21:34.587 iops : min= 190, max= 418, avg=254.40, stdev=55.56, samples=20 00:21:34.587 lat (msec) : 20=0.47%, 50=29.56%, 100=68.65%, 250=1.33% 00:21:34.587 cpu : usr=33.22%, sys=1.94%, ctx=1008, majf=0, minf=9 00:21:34.587 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:34.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:34.587 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:34.587 issued rwts: total=2561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:34.587 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:34.588 00:21:34.588 Run status group 0 (all jobs): 00:21:34.588 READ: bw=24.1MiB/s (25.2MB/s), 997KiB/s-1064KiB/s (1021kB/s-1090kB/s), io=242MiB (254MB), run=10001-10074msec 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 bdev_null0 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 [2024-10-15 08:32:34.909477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 bdev_null1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.588 { 00:21:34.588 "params": { 00:21:34.588 "name": "Nvme$subsystem", 00:21:34.588 "trtype": "$TEST_TRANSPORT", 00:21:34.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.588 "adrfam": "ipv4", 00:21:34.588 "trsvcid": "$NVMF_PORT", 00:21:34.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.588 "hdgst": ${hdgst:-false}, 00:21:34.588 "ddgst": ${ddgst:-false} 00:21:34.588 }, 00:21:34.588 "method": "bdev_nvme_attach_controller" 00:21:34.588 } 00:21:34.588 EOF 00:21:34.588 )") 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local 
file 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:34.588 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.588 { 00:21:34.588 "params": { 00:21:34.588 "name": "Nvme$subsystem", 00:21:34.588 "trtype": "$TEST_TRANSPORT", 00:21:34.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.588 "adrfam": "ipv4", 00:21:34.588 "trsvcid": "$NVMF_PORT", 00:21:34.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.589 "hdgst": ${hdgst:-false}, 00:21:34.589 "ddgst": ${ddgst:-false} 00:21:34.589 }, 00:21:34.589 "method": "bdev_nvme_attach_controller" 00:21:34.589 } 00:21:34.589 EOF 00:21:34.589 )") 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:34.589 "params": { 00:21:34.589 "name": "Nvme0", 00:21:34.589 "trtype": "tcp", 00:21:34.589 "traddr": "10.0.0.3", 00:21:34.589 "adrfam": "ipv4", 00:21:34.589 "trsvcid": "4420", 00:21:34.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:34.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:34.589 "hdgst": false, 00:21:34.589 "ddgst": false 00:21:34.589 }, 00:21:34.589 "method": "bdev_nvme_attach_controller" 00:21:34.589 },{ 00:21:34.589 "params": { 00:21:34.589 "name": "Nvme1", 00:21:34.589 "trtype": "tcp", 00:21:34.589 "traddr": "10.0.0.3", 00:21:34.589 "adrfam": "ipv4", 00:21:34.589 "trsvcid": "4420", 00:21:34.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.589 "hdgst": false, 00:21:34.589 "ddgst": false 00:21:34.589 }, 00:21:34.589 "method": "bdev_nvme_attach_controller" 00:21:34.589 }' 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:34.589 08:32:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:34.589 08:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:34.589 08:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:34.589 08:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:34.589 08:32:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:34.589 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:34.589 ... 00:21:34.589 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:34.589 ... 
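The xtrace above captures how the fio_dif_rand_params test drives fio through SPDK's bdev plugin: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (Nvme0 and Nvme1, TCP to 10.0.0.3:4420, digests off), and that JSON plus the generated fio job file are handed to fio over /dev/fd while the plugin is LD_PRELOADed. A minimal standalone sketch of the same invocation, assuming the repository layout used by this job; nvme.json and dif.fio are illustrative stand-ins for the /dev/fd/62 and /dev/fd/61 descriptors, and the JSON wrapper around the printed config entries is not shown in this excerpt:

  # Reproduce the fio invocation seen above with regular files instead of /dev/fd.
  # nvme.json: the bdev_nvme_attach_controller entries printed by the test.
  # dif.fio:   the randread job definitions (8k/16k/128k blocks, iodepth=8).
  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
      --ioengine=spdk_bdev \
      --spdk_json_conf=nvme.json \
      dif.fio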
00:21:34.589 fio-3.35 00:21:34.589 Starting 4 threads 00:21:39.864 00:21:39.864 filename0: (groupid=0, jobs=1): err= 0: pid=83933: Tue Oct 15 08:32:40 2024 00:21:39.864 read: IOPS=2201, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5002msec) 00:21:39.864 slat (nsec): min=6382, max=84027, avg=12155.51, stdev=7537.99 00:21:39.864 clat (usec): min=445, max=10763, avg=3601.71, stdev=996.73 00:21:39.864 lat (usec): min=456, max=10770, avg=3613.87, stdev=996.19 00:21:39.864 clat percentiles (usec): 00:21:39.864 | 1.00th=[ 1172], 5.00th=[ 1909], 10.00th=[ 2311], 20.00th=[ 2966], 00:21:39.864 | 30.00th=[ 3163], 40.00th=[ 3326], 50.00th=[ 3523], 60.00th=[ 3687], 00:21:39.864 | 70.00th=[ 4113], 80.00th=[ 4424], 90.00th=[ 4883], 95.00th=[ 5211], 00:21:39.864 | 99.00th=[ 6259], 99.50th=[ 6783], 99.90th=[ 6980], 99.95th=[ 7898], 00:21:39.864 | 99.99th=[ 9896] 00:21:39.864 bw ( KiB/s): min=14128, max=20208, per=26.41%, avg=17583.44, stdev=1805.11, samples=9 00:21:39.864 iops : min= 1766, max= 2526, avg=2197.89, stdev=225.63, samples=9 00:21:39.864 lat (usec) : 500=0.03%, 750=0.08%, 1000=0.33% 00:21:39.864 lat (msec) : 2=5.79%, 4=62.12%, 10=31.64%, 20=0.01% 00:21:39.864 cpu : usr=91.86%, sys=7.20%, ctx=14, majf=0, minf=0 00:21:39.865 IO depths : 1=0.1%, 2=1.9%, 4=67.5%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:39.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.865 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.865 issued rwts: total=11010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.865 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:39.865 filename0: (groupid=0, jobs=1): err= 0: pid=83934: Tue Oct 15 08:32:40 2024 00:21:39.865 read: IOPS=2063, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5003msec) 00:21:39.865 slat (usec): min=3, max=106, avg=18.09, stdev= 9.23 00:21:39.865 clat (usec): min=922, max=10775, avg=3826.09, stdev=945.41 00:21:39.865 lat (usec): min=930, max=10791, avg=3844.18, stdev=945.12 00:21:39.865 clat percentiles (usec): 00:21:39.865 | 1.00th=[ 1729], 5.00th=[ 2311], 10.00th=[ 2769], 20.00th=[ 3097], 00:21:39.865 | 30.00th=[ 3294], 40.00th=[ 3458], 50.00th=[ 3621], 60.00th=[ 4047], 00:21:39.865 | 70.00th=[ 4424], 80.00th=[ 4686], 90.00th=[ 5080], 95.00th=[ 5342], 00:21:39.865 | 99.00th=[ 6259], 99.50th=[ 6783], 99.90th=[ 7570], 99.95th=[ 7832], 00:21:39.865 | 99.99th=[ 9765] 00:21:39.865 bw ( KiB/s): min=13568, max=18096, per=24.71%, avg=16447.67, stdev=1575.20, samples=9 00:21:39.865 iops : min= 1696, max= 2262, avg=2055.89, stdev=196.87, samples=9 00:21:39.865 lat (usec) : 1000=0.14% 00:21:39.865 lat (msec) : 2=2.35%, 4=56.95%, 10=40.55%, 20=0.01% 00:21:39.865 cpu : usr=92.42%, sys=6.58%, ctx=10, majf=0, minf=10 00:21:39.865 IO depths : 1=0.2%, 2=4.3%, 4=65.8%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:39.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.865 complete : 0=0.0%, 4=98.3%, 8=1.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.865 issued rwts: total=10323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.865 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:39.865 filename1: (groupid=0, jobs=1): err= 0: pid=83935: Tue Oct 15 08:32:40 2024 00:21:39.865 read: IOPS=2050, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5001msec) 00:21:39.865 slat (usec): min=4, max=104, avg=17.12, stdev= 9.90 00:21:39.865 clat (usec): min=951, max=10754, avg=3853.65, stdev=936.25 00:21:39.865 lat (usec): min=958, max=10768, avg=3870.77, stdev=935.97 00:21:39.865 clat percentiles (usec): 00:21:39.865 | 
1.00th=[ 1876], 5.00th=[ 2343], 10.00th=[ 2835], 20.00th=[ 3097], 00:21:39.865 | 30.00th=[ 3294], 40.00th=[ 3490], 50.00th=[ 3654], 60.00th=[ 4047], 00:21:39.865 | 70.00th=[ 4424], 80.00th=[ 4686], 90.00th=[ 5080], 95.00th=[ 5342], 00:21:39.865 | 99.00th=[ 6259], 99.50th=[ 6783], 99.90th=[ 6980], 99.95th=[ 7898], 00:21:39.865 | 99.99th=[ 9765] 00:21:39.865 bw ( KiB/s): min=13595, max=18096, per=24.50%, avg=16308.44, stdev=1506.12, samples=9 00:21:39.865 iops : min= 1699, max= 2262, avg=2038.44, stdev=188.31, samples=9 00:21:39.865 lat (usec) : 1000=0.10% 00:21:39.865 lat (msec) : 2=1.89%, 4=56.96%, 10=41.04%, 20=0.01% 00:21:39.865 cpu : usr=92.42%, sys=6.58%, ctx=12, majf=0, minf=0 00:21:39.865 IO depths : 1=0.3%, 2=4.3%, 4=65.6%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:39.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.865 complete : 0=0.0%, 4=98.3%, 8=1.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.865 issued rwts: total=10256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.865 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:39.865 filename1: (groupid=0, jobs=1): err= 0: pid=83936: Tue Oct 15 08:32:40 2024 00:21:39.865 read: IOPS=2007, BW=15.7MiB/s (16.4MB/s)(78.4MiB/5001msec) 00:21:39.865 slat (usec): min=3, max=109, avg=16.94, stdev= 9.11 00:21:39.865 clat (usec): min=836, max=10787, avg=3933.00, stdev=934.78 00:21:39.865 lat (usec): min=844, max=10800, avg=3949.93, stdev=934.43 00:21:39.865 clat percentiles (usec): 00:21:39.865 | 1.00th=[ 1860], 5.00th=[ 2409], 10.00th=[ 2966], 20.00th=[ 3163], 00:21:39.865 | 30.00th=[ 3359], 40.00th=[ 3523], 50.00th=[ 3785], 60.00th=[ 4228], 00:21:39.865 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5080], 95.00th=[ 5407], 00:21:39.865 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 6980], 99.95th=[ 7832], 00:21:39.865 | 99.99th=[ 9765] 00:21:39.865 bw ( KiB/s): min=14112, max=19024, per=24.22%, avg=16120.56, stdev=1795.17, samples=9 00:21:39.865 iops : min= 1764, max= 2378, avg=2015.00, stdev=224.35, samples=9 00:21:39.865 lat (usec) : 1000=0.06% 00:21:39.865 lat (msec) : 2=2.19%, 4=53.16%, 10=44.58%, 20=0.01% 00:21:39.865 cpu : usr=92.38%, sys=6.68%, ctx=7, majf=0, minf=0 00:21:39.865 IO depths : 1=0.2%, 2=6.5%, 4=64.9%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:39.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.865 complete : 0=0.0%, 4=97.5%, 8=2.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.865 issued rwts: total=10041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.865 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:39.865 00:21:39.865 Run status group 0 (all jobs): 00:21:39.865 READ: bw=65.0MiB/s (68.2MB/s), 15.7MiB/s-17.2MiB/s (16.4MB/s-18.0MB/s), io=325MiB (341MB), run=5001-5003msec 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 ************************************ 00:21:39.865 END TEST fio_dif_rand_params 00:21:39.865 ************************************ 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.865 00:21:39.865 real 0m23.974s 00:21:39.865 user 2m2.688s 00:21:39.865 sys 0m8.952s 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 08:32:41 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:39.865 08:32:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:39.865 08:32:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 ************************************ 00:21:39.865 START TEST fio_dif_digest 00:21:39.865 ************************************ 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 bdev_null0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:39.865 [2024-10-15 08:32:41.236094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:21:39.865 08:32:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:39.866 { 00:21:39.866 "params": { 00:21:39.866 "name": "Nvme$subsystem", 00:21:39.866 "trtype": "$TEST_TRANSPORT", 00:21:39.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.866 "adrfam": "ipv4", 00:21:39.866 "trsvcid": "$NVMF_PORT", 00:21:39.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.866 
"hdgst": ${hdgst:-false}, 00:21:39.866 "ddgst": ${ddgst:-false} 00:21:39.866 }, 00:21:39.866 "method": "bdev_nvme_attach_controller" 00:21:39.866 } 00:21:39.866 EOF 00:21:39.866 )") 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:39.866 "params": { 00:21:39.866 "name": "Nvme0", 00:21:39.866 "trtype": "tcp", 00:21:39.866 "traddr": "10.0.0.3", 00:21:39.866 "adrfam": "ipv4", 00:21:39.866 "trsvcid": "4420", 00:21:39.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:39.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:39.866 "hdgst": true, 00:21:39.866 "ddgst": true 00:21:39.866 }, 00:21:39.866 "method": "bdev_nvme_attach_controller" 00:21:39.866 }' 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:39.866 08:32:41 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.866 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:39.866 ... 
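For the digest test above, the target side is built from the same RPCs the xtrace records: a 64 MiB, 512-byte-block null bdev with 16-byte metadata and DIF type 3, exposed through subsystem cnode0 with a TCP listener on 10.0.0.3:4420, then attached by fio with header and data digests enabled. A sketch of the equivalent RPC sequence against a target whose TCP transport is already created, assuming scripts/rpc.py as the client (the test itself goes through its rpc_cmd wrapper):

  # Target-side setup mirrored from the rpc_cmd calls in the log.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.3 -s 4420
  # The initiator-side JSON then sets "hdgst": true and "ddgst": true, so the
  # NVMe/TCP PDUs carry header and data digests during the 128 KiB randread jobs.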
00:21:39.866 fio-3.35 00:21:39.866 Starting 3 threads 00:21:52.071 00:21:52.071 filename0: (groupid=0, jobs=1): err= 0: pid=84042: Tue Oct 15 08:32:52 2024 00:21:52.071 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10007msec) 00:21:52.071 slat (nsec): min=5720, max=48140, avg=11460.38, stdev=5708.12 00:21:52.071 clat (usec): min=10756, max=17871, avg=13086.20, stdev=936.00 00:21:52.071 lat (usec): min=10764, max=17886, avg=13097.66, stdev=936.25 00:21:52.071 clat percentiles (usec): 00:21:52.071 | 1.00th=[11076], 5.00th=[11600], 10.00th=[12125], 20.00th=[12387], 00:21:52.071 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13304], 00:21:52.071 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14091], 95.00th=[14353], 00:21:52.071 | 99.00th=[16581], 99.50th=[16581], 99.90th=[17957], 99.95th=[17957], 00:21:52.071 | 99.99th=[17957] 00:21:52.071 bw ( KiB/s): min=26880, max=32256, per=33.27%, avg=29221.16, stdev=1625.54, samples=19 00:21:52.071 iops : min= 210, max= 252, avg=228.26, stdev=12.67, samples=19 00:21:52.071 lat (msec) : 20=100.00% 00:21:52.071 cpu : usr=88.64%, sys=10.57%, ctx=15, majf=0, minf=9 00:21:52.071 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:52.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.071 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.071 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:52.071 filename0: (groupid=0, jobs=1): err= 0: pid=84043: Tue Oct 15 08:32:52 2024 00:21:52.071 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10003msec) 00:21:52.071 slat (nsec): min=6922, max=61965, avg=15376.07, stdev=5200.98 00:21:52.071 clat (usec): min=9298, max=19187, avg=13075.38, stdev=958.56 00:21:52.071 lat (usec): min=9311, max=19203, avg=13090.76, stdev=958.79 00:21:52.071 clat percentiles (usec): 00:21:52.071 | 1.00th=[10945], 5.00th=[11600], 10.00th=[12125], 20.00th=[12387], 00:21:52.071 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:21:52.071 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14353], 00:21:52.071 | 99.00th=[16581], 99.50th=[16581], 99.90th=[19268], 99.95th=[19268], 00:21:52.071 | 99.99th=[19268] 00:21:52.071 bw ( KiB/s): min=26880, max=32256, per=33.27%, avg=29227.63, stdev=1591.15, samples=19 00:21:52.071 iops : min= 210, max= 252, avg=228.32, stdev=12.41, samples=19 00:21:52.071 lat (msec) : 10=0.13%, 20=99.87% 00:21:52.071 cpu : usr=90.54%, sys=8.81%, ctx=10, majf=0, minf=0 00:21:52.071 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:52.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.071 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.071 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:52.071 filename0: (groupid=0, jobs=1): err= 0: pid=84044: Tue Oct 15 08:32:52 2024 00:21:52.072 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10004msec) 00:21:52.072 slat (nsec): min=6960, max=59706, avg=14725.36, stdev=4982.38 00:21:52.072 clat (usec): min=9303, max=19196, avg=13078.67, stdev=959.10 00:21:52.072 lat (usec): min=9317, max=19208, avg=13093.39, stdev=959.16 00:21:52.072 clat percentiles (usec): 00:21:52.072 | 1.00th=[10945], 5.00th=[11600], 10.00th=[12125], 20.00th=[12387], 00:21:52.072 | 30.00th=[12518], 40.00th=[12649], 
50.00th=[12911], 60.00th=[13173], 00:21:52.072 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14353], 00:21:52.072 | 99.00th=[16581], 99.50th=[16581], 99.90th=[19268], 99.95th=[19268], 00:21:52.072 | 99.99th=[19268] 00:21:52.072 bw ( KiB/s): min=26880, max=32256, per=33.27%, avg=29224.42, stdev=1587.90, samples=19 00:21:52.072 iops : min= 210, max= 252, avg=228.32, stdev=12.41, samples=19 00:21:52.072 lat (msec) : 10=0.13%, 20=99.87% 00:21:52.072 cpu : usr=91.18%, sys=8.23%, ctx=11, majf=0, minf=0 00:21:52.072 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:52.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.072 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.072 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:52.072 00:21:52.072 Run status group 0 (all jobs): 00:21:52.072 READ: bw=85.8MiB/s (89.9MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=858MiB (900MB), run=10003-10007msec 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.072 00:21:52.072 real 0m11.148s 00:21:52.072 user 0m27.767s 00:21:52.072 sys 0m3.090s 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.072 08:32:52 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:52.072 ************************************ 00:21:52.072 END TEST fio_dif_digest 00:21:52.072 ************************************ 00:21:52.072 08:32:52 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:52.072 08:32:52 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.072 rmmod nvme_tcp 00:21:52.072 rmmod nvme_fabrics 00:21:52.072 rmmod nvme_keyring 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.072 08:32:52 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 83274 ']' 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 83274 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 83274 ']' 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 83274 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83274 00:21:52.072 killing process with pid 83274 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83274' 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@969 -- # kill 83274 00:21:52.072 08:32:52 nvmf_dif -- common/autotest_common.sh@974 -- # wait 83274 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:21:52.072 08:32:52 nvmf_dif -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:52.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:52.072 Waiting for block devices as requested 00:21:52.072 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:52.072 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:52.072 08:32:53 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.072 08:32:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:52.072 08:32:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.072 08:32:53 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:52.072 00:21:52.072 real 1m1.120s 00:21:52.072 user 3m48.058s 00:21:52.072 sys 0m21.348s 00:21:52.072 08:32:53 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.072 ************************************ 00:21:52.072 END TEST nvmf_dif 00:21:52.072 ************************************ 00:21:52.072 08:32:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:52.072 08:32:53 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:52.072 08:32:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:52.072 08:32:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:52.072 08:32:53 -- common/autotest_common.sh@10 -- # set +x 00:21:52.072 ************************************ 00:21:52.072 START TEST nvmf_abort_qd_sizes 00:21:52.072 ************************************ 00:21:52.072 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:52.072 * Looking for test storage... 00:21:52.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:52.072 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:52.072 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:21:52.072 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.331 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:52.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.332 --rc genhtml_branch_coverage=1 00:21:52.332 --rc genhtml_function_coverage=1 00:21:52.332 --rc genhtml_legend=1 00:21:52.332 --rc geninfo_all_blocks=1 00:21:52.332 --rc geninfo_unexecuted_blocks=1 00:21:52.332 00:21:52.332 ' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:52.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.332 --rc genhtml_branch_coverage=1 00:21:52.332 --rc genhtml_function_coverage=1 00:21:52.332 --rc genhtml_legend=1 00:21:52.332 --rc geninfo_all_blocks=1 00:21:52.332 --rc geninfo_unexecuted_blocks=1 00:21:52.332 00:21:52.332 ' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:52.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.332 --rc genhtml_branch_coverage=1 00:21:52.332 --rc genhtml_function_coverage=1 00:21:52.332 --rc genhtml_legend=1 00:21:52.332 --rc geninfo_all_blocks=1 00:21:52.332 --rc geninfo_unexecuted_blocks=1 00:21:52.332 00:21:52.332 ' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:52.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.332 --rc genhtml_branch_coverage=1 00:21:52.332 --rc genhtml_function_coverage=1 00:21:52.332 --rc genhtml_legend=1 00:21:52.332 --rc geninfo_all_blocks=1 00:21:52.332 --rc geninfo_unexecuted_blocks=1 00:21:52.332 00:21:52.332 ' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.332 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ virt != virt ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ no == yes ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@449 -- # [[ virt == phy ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # [[ virt == phy-fallback ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # [[ tcp == tcp ]] 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@458 -- # nvmf_veth_init 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:52.332 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:52.332 Cannot find device "nvmf_init_br" 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:52.333 Cannot find device "nvmf_init_br2" 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:52.333 Cannot find device "nvmf_tgt_br" 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:52.333 Cannot find device "nvmf_tgt_br2" 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:52.333 Cannot find device "nvmf_init_br" 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:52.333 Cannot find device "nvmf_init_br2" 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:52.333 Cannot find device "nvmf_tgt_br" 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:52.333 08:32:53 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:52.333 Cannot find device "nvmf_tgt_br2" 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:52.333 Cannot find device "nvmf_br" 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:52.333 Cannot find device "nvmf_init_if" 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:52.333 Cannot find device "nvmf_init_if2" 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:52.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:52.333 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:52.333 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:52.591 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:52.591 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:52.592 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:52.592 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:21:52.592 00:21:52.592 --- 10.0.0.3 ping statistics --- 00:21:52.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.592 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:52.592 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:52.592 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:21:52.592 00:21:52.592 --- 10.0.0.4 ping statistics --- 00:21:52.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.592 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:52.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:52.592 00:21:52.592 --- 10.0.0.1 ping statistics --- 00:21:52.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.592 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:52.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:52.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:52.592 00:21:52.592 --- 10.0.0.2 ping statistics --- 00:21:52.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.592 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # return 0 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:21:52.592 08:32:54 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:53.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.528 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.528 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=84705 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 84705 00:21:53.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 84705 ']' 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.528 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:53.528 [2024-10-15 08:32:55.234360] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:21:53.528 [2024-10-15 08:32:55.234620] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.787 [2024-10-15 08:32:55.380697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:53.787 [2024-10-15 08:32:55.445764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.787 [2024-10-15 08:32:55.446082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.787 [2024-10-15 08:32:55.446108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.787 [2024-10-15 08:32:55.446141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.787 [2024-10-15 08:32:55.446152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.787 [2024-10-15 08:32:55.447710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.787 [2024-10-15 08:32:55.447838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.787 [2024-10-15 08:32:55.448268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.787 [2024-10-15 08:32:55.448278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.046 [2024-10-15 08:32:55.523153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:54.046 08:32:55 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:54.046 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
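The trace above is the nvme_in_userspace helper from scripts/common.sh locating NVMe controllers by PCI class code (class 01h, subclass 08h, prog-if 02h) rather than by driver binding. Composed into one stand-alone pipeline, the same lookup is roughly the sketch below; it assumes the lspci -mm -n -D output format used in the trace and is illustrative only.

  # List PCI addresses of NVMe controllers (class 01h / subclass 08h / prog-if 02h),
  # mirroring the grep/awk/tr steps traced above.
  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
      | tr -d '"'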
00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.047 08:32:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:54.047 ************************************ 00:21:54.047 START TEST spdk_target_abort 00:21:54.047 ************************************ 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:54.047 spdk_targetn1 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.047 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:54.047 [2024-10-15 08:32:55.770693] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:54.306 [2024-10-15 08:32:55.803072] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:54.306 08:32:55 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:54.306 08:32:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:57.591 Initializing NVMe Controllers 00:21:57.591 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:57.591 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:57.591 Initialization complete. Launching workers. 
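For reference, the rpc_cmd calls traced above map one-to-one onto scripts/rpc.py invocations against the target's RPC socket. A rough stand-alone equivalent of this setup and of the first abort pass is sketched below; the relative paths and the default /var/tmp/spdk.sock socket are assumptions, not taken from this run.

  # Sketch: expose the local NVMe drive over NVMe/TCP and abort in-flight I/O against it.
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
  # Queue depth 4 is the first entry in qds=(4 24 64); -M 50 makes half of the I/O reads.
  build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'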
00:21:57.591 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9749, failed: 0 00:21:57.591 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1082, failed to submit 8667 00:21:57.591 success 874, unsuccessful 208, failed 0 00:21:57.591 08:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:57.592 08:32:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:00.879 Initializing NVMe Controllers 00:22:00.879 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:00.879 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:00.879 Initialization complete. Launching workers. 00:22:00.879 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:22:00.879 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1157, failed to submit 7843 00:22:00.879 success 403, unsuccessful 754, failed 0 00:22:00.879 08:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:00.879 08:33:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:04.168 Initializing NVMe Controllers 00:22:04.168 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:04.168 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:04.168 Initialization complete. Launching workers. 
00:22:04.168 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31966, failed: 0 00:22:04.169 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2299, failed to submit 29667 00:22:04.169 success 501, unsuccessful 1798, failed 0 00:22:04.169 08:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:04.169 08:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.169 08:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 08:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.169 08:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:04.169 08:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.169 08:33:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84705 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 84705 ']' 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 84705 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84705 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:04.736 killing process with pid 84705 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84705' 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 84705 00:22:04.736 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 84705 00:22:04.995 ************************************ 00:22:04.995 END TEST spdk_target_abort 00:22:04.995 ************************************ 00:22:04.995 00:22:04.995 real 0m10.788s 00:22:04.995 user 0m41.093s 00:22:04.995 sys 0m2.216s 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:04.995 08:33:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:04.995 08:33:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:04.995 08:33:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:04.995 08:33:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:04.995 ************************************ 00:22:04.995 START TEST kernel_target_abort 00:22:04.995 
************************************ 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:04.995 08:33:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:05.254 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:05.254 Waiting for block devices as requested 00:22:05.254 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:05.517 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:05.517 No valid GPT data, bailing 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n2 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:22:05.517 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n2 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:05.518 No valid GPT data, bailing 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n2 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n3 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n3 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:05.518 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:05.776 No valid GPT data, bailing 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n3 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:05.776 No valid GPT data, bailing 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme1n1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:22:05.776 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 --hostid=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 -a 10.0.0.1 -t tcp -s 4420 00:22:05.777 00:22:05.777 Discovery Log Number of Records 2, Generation counter 2 00:22:05.777 =====Discovery Log Entry 0====== 00:22:05.777 trtype: tcp 00:22:05.777 adrfam: ipv4 00:22:05.777 subtype: current discovery subsystem 00:22:05.777 treq: not specified, sq flow control disable supported 00:22:05.777 portid: 1 00:22:05.777 trsvcid: 4420 00:22:05.777 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:05.777 traddr: 10.0.0.1 00:22:05.777 eflags: none 00:22:05.777 sectype: none 00:22:05.777 =====Discovery Log Entry 1====== 00:22:05.777 trtype: tcp 00:22:05.777 adrfam: ipv4 00:22:05.777 subtype: nvme subsystem 00:22:05.777 treq: not specified, sq flow control disable supported 00:22:05.777 portid: 1 00:22:05.777 trsvcid: 4420 00:22:05.777 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:05.777 traddr: 10.0.0.1 00:22:05.777 eflags: none 00:22:05.777 sectype: none 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:05.777 08:33:07 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:05.777 08:33:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:09.063 Initializing NVMe Controllers 00:22:09.063 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:09.063 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:09.063 Initialization complete. Launching workers. 00:22:09.063 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33926, failed: 0 00:22:09.063 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33926, failed to submit 0 00:22:09.063 success 0, unsuccessful 33926, failed 0 00:22:09.063 08:33:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:09.063 08:33:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:12.350 Initializing NVMe Controllers 00:22:12.350 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:12.350 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:12.350 Initialization complete. Launching workers. 
00:22:12.350 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65028, failed: 0 00:22:12.350 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26734, failed to submit 38294 00:22:12.350 success 0, unsuccessful 26734, failed 0 00:22:12.350 08:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:12.350 08:33:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:15.654 Initializing NVMe Controllers 00:22:15.654 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:15.654 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:15.654 Initialization complete. Launching workers. 00:22:15.654 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73731, failed: 0 00:22:15.654 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18430, failed to submit 55301 00:22:15.654 success 0, unsuccessful 18430, failed 0 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:22:15.654 08:33:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:22:15.654 08:33:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:16.221 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:17.598 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:17.599 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:17.858 00:22:17.858 real 0m12.815s 00:22:17.858 user 0m5.741s 00:22:17.858 sys 0m4.478s 00:22:17.858 ************************************ 00:22:17.858 END TEST kernel_target_abort 00:22:17.858 ************************************ 00:22:17.858 08:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:17.858 08:33:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:17.858 
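The kernel_target_abort flow above drives the Linux kernel nvmet target purely through configfs; xtrace shows the mkdir/echo/ln commands but not the files the echoes are redirected into. A minimal sketch of the equivalent configuration is below, assuming the standard nvmet configfs attribute names.

  # Sketch: export /dev/nvme1n1 over NVMe/TCP via the kernel nvmet target (teardown
  # mirrors clean_kernel_target: unlink the port symlink, rmdir in reverse order,
  # then modprobe -r nvmet_tcp nvmet).
  modprobe nvmet
  modprobe nvmet_tcp            # may already be auto-loaded when the port is enabled
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo 1 > "$subsys/attr_allow_any_host"            # assumed target of the traced 'echo 1'
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"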
08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:17.858 rmmod nvme_tcp 00:22:17.858 rmmod nvme_fabrics 00:22:17.858 rmmod nvme_keyring 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 84705 ']' 00:22:17.858 Process with pid 84705 is not found 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 84705 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 84705 ']' 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 84705 00:22:17.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84705) - No such process 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 84705 is not found' 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:22:17.858 08:33:19 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:18.426 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:18.426 Waiting for block devices as requested 00:22:18.426 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:18.426 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:18.426 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:18.685 08:33:20 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:22:18.685 00:22:18.685 real 0m26.707s 00:22:18.685 user 0m48.010s 00:22:18.685 sys 0m8.153s 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:18.685 08:33:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:18.685 ************************************ 00:22:18.685 END TEST nvmf_abort_qd_sizes 00:22:18.685 ************************************ 00:22:18.945 08:33:20 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:18.945 08:33:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:18.945 08:33:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.945 08:33:20 -- common/autotest_common.sh@10 -- # set +x 00:22:18.945 ************************************ 00:22:18.945 START TEST keyring_file 00:22:18.945 ************************************ 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:18.945 * Looking for test storage... 
00:22:18.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:18.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.945 --rc genhtml_branch_coverage=1 00:22:18.945 --rc genhtml_function_coverage=1 00:22:18.945 --rc genhtml_legend=1 00:22:18.945 --rc geninfo_all_blocks=1 00:22:18.945 --rc geninfo_unexecuted_blocks=1 00:22:18.945 00:22:18.945 ' 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:18.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.945 --rc genhtml_branch_coverage=1 00:22:18.945 --rc genhtml_function_coverage=1 00:22:18.945 --rc genhtml_legend=1 00:22:18.945 --rc geninfo_all_blocks=1 00:22:18.945 --rc 
geninfo_unexecuted_blocks=1 00:22:18.945 00:22:18.945 ' 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:18.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.945 --rc genhtml_branch_coverage=1 00:22:18.945 --rc genhtml_function_coverage=1 00:22:18.945 --rc genhtml_legend=1 00:22:18.945 --rc geninfo_all_blocks=1 00:22:18.945 --rc geninfo_unexecuted_blocks=1 00:22:18.945 00:22:18.945 ' 00:22:18.945 08:33:20 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:18.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:18.945 --rc genhtml_branch_coverage=1 00:22:18.945 --rc genhtml_function_coverage=1 00:22:18.945 --rc genhtml_legend=1 00:22:18.945 --rc geninfo_all_blocks=1 00:22:18.945 --rc geninfo_unexecuted_blocks=1 00:22:18.945 00:22:18.945 ' 00:22:18.945 08:33:20 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:18.945 08:33:20 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.945 08:33:20 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.945 08:33:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.946 08:33:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.946 08:33:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.946 08:33:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.946 08:33:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.946 08:33:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:18.946 08:33:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:18.946 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:18.946 08:33:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:18.946 08:33:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:18.946 08:33:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:18.946 08:33:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:18.946 08:33:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:18.946 08:33:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:18.946 08:33:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:18.946 08:33:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:18.946 08:33:20 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:18.946 08:33:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:18.946 08:33:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:18.946 08:33:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:18.946 08:33:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XqQMm3T7Nq 00:22:18.946 08:33:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:22:18.946 08:33:20 keyring_file -- nvmf/common.sh@731 -- # python - 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XqQMm3T7Nq 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XqQMm3T7Nq 00:22:19.205 08:33:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.XqQMm3T7Nq 00:22:19.205 08:33:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uDeSZz8h7S 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:19.205 08:33:20 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:19.205 08:33:20 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:22:19.205 08:33:20 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:19.205 08:33:20 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:22:19.205 08:33:20 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:22:19.205 08:33:20 keyring_file -- nvmf/common.sh@731 -- # python - 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uDeSZz8h7S 00:22:19.205 08:33:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uDeSZz8h7S 00:22:19.205 08:33:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uDeSZz8h7S 00:22:19.205 08:33:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=85606 00:22:19.205 08:33:20 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:19.205 08:33:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85606 00:22:19.205 08:33:20 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85606 ']' 00:22:19.205 08:33:20 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.205 08:33:20 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
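The two prep_key sequences above (producing /tmp/tmp.XqQMm3T7Nq and /tmp/tmp.uDeSZz8h7S) wrap each hex key string in the NVMe TLS PSK interchange format, the "NVMeTLSkey-1:..." line written to the temp file, before it is handed to the keyring. A minimal sketch of that step, assuming the key argument is used as literal ASCII bytes and a little-endian CRC32 is appended before base64 encoding; the exact byte handling inside SPDK's format_interchange_psk/format_key helpers is not visible in this trace:

prep_key_sketch() {
	# Hypothetical stand-in for keyring/common.sh's prep_key; only the mktemp,
	# "python -" and chmod 0600 steps are confirmed by the trace above.
	local name=$1 key=$2 digest=$3 path
	path=$(mktemp)
	python - "$key" "$digest" > "$path" << 'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()         # assumption: the hex string is taken as ASCII bytes
digest = int(sys.argv[2])          # 0 selects the plaintext (unhashed) PSK form
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
	chmod 0600 "$path"   # looser modes are rejected later in this log (the 0660 case)
	echo "$path"
}

Called as prep_key_sketch key0 00112233445566778899aabbccddeeff 0, this yields a file whose contents match the shape of the keys the trace registers below.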
00:22:19.205 08:33:20 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.205 08:33:20 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.205 08:33:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:19.205 [2024-10-15 08:33:20.849591] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:22:19.205 [2024-10-15 08:33:20.850332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85606 ] 00:22:19.464 [2024-10-15 08:33:20.998847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.464 [2024-10-15 08:33:21.065769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.464 [2024-10-15 08:33:21.164414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:19.724 08:33:21 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:19.724 08:33:21 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:22:19.724 08:33:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:19.724 08:33:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.724 08:33:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:19.724 [2024-10-15 08:33:21.442388] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.983 null0 00:22:19.983 [2024-10-15 08:33:21.474380] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.983 [2024-10-15 08:33:21.474617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.983 08:33:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:19.983 [2024-10-15 08:33:21.506377] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:19.983 request: 00:22:19.983 { 00:22:19.983 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:19.983 "secure_channel": false, 00:22:19.983 "listen_address": { 00:22:19.983 "trtype": "tcp", 00:22:19.983 "traddr": "127.0.0.1", 00:22:19.983 "trsvcid": "4420" 00:22:19.983 }, 00:22:19.983 "method": "nvmf_subsystem_add_listener", 
00:22:19.983 "req_id": 1 00:22:19.983 } 00:22:19.983 Got JSON-RPC error response 00:22:19.983 response: 00:22:19.983 { 00:22:19.983 "code": -32602, 00:22:19.983 "message": "Invalid parameters" 00:22:19.983 } 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.983 08:33:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=85620 00:22:19.983 08:33:21 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:19.983 08:33:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85620 /var/tmp/bperf.sock 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85620 ']' 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:19.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:19.983 08:33:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:19.983 [2024-10-15 08:33:21.576154] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:22:19.983 [2024-10-15 08:33:21.576248] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85620 ] 00:22:20.242 [2024-10-15 08:33:21.713692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.242 [2024-10-15 08:33:21.789035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.242 [2024-10-15 08:33:21.865405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:20.242 08:33:21 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:20.242 08:33:21 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:22:20.242 08:33:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqQMm3T7Nq 00:22:20.242 08:33:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XqQMm3T7Nq 00:22:20.502 08:33:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uDeSZz8h7S 00:22:20.502 08:33:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uDeSZz8h7S 00:22:20.760 08:33:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:20.760 08:33:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:20.760 08:33:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:20.760 08:33:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:20.760 08:33:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:21.019 08:33:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XqQMm3T7Nq == \/\t\m\p\/\t\m\p\.\X\q\Q\M\m\3\T\7\N\q ]] 00:22:21.019 08:33:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:21.019 08:33:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:21.019 08:33:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:21.019 08:33:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.019 08:33:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:21.278 08:33:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uDeSZz8h7S == \/\t\m\p\/\t\m\p\.\u\D\e\S\Z\z\8\h\7\S ]] 00:22:21.278 08:33:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:21.278 08:33:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:21.278 08:33:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:21.278 08:33:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:21.278 08:33:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:21.278 08:33:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.538 08:33:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:21.538 08:33:23 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:21.538 08:33:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:21.538 08:33:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:21.538 08:33:23 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:21.538 08:33:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:21.538 08:33:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:21.798 08:33:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:21.798 08:33:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:21.798 08:33:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:22.057 [2024-10-15 08:33:23.708269] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:22.057 nvme0n1 00:22:22.316 08:33:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:22.316 08:33:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:22.316 08:33:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:22.316 08:33:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:22.316 08:33:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:22.316 08:33:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:22.316 08:33:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:22.316 08:33:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:22.316 08:33:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:22.316 08:33:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:22.316 08:33:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:22.316 08:33:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:22.316 08:33:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:22.883 08:33:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:22:22.883 08:33:24 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:22.883 Running I/O for 1 seconds... 
00:22:23.819 13547.00 IOPS, 52.92 MiB/s 00:22:23.819 Latency(us) 00:22:23.819 [2024-10-15T08:33:25.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.819 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:23.819 nvme0n1 : 1.01 13605.48 53.15 0.00 0.00 9386.88 3291.69 13941.29 00:22:23.819 [2024-10-15T08:33:25.550Z] =================================================================================================================== 00:22:23.819 [2024-10-15T08:33:25.550Z] Total : 13605.48 53.15 0.00 0.00 9386.88 3291.69 13941.29 00:22:23.819 { 00:22:23.819 "results": [ 00:22:23.819 { 00:22:23.819 "job": "nvme0n1", 00:22:23.819 "core_mask": "0x2", 00:22:23.819 "workload": "randrw", 00:22:23.819 "percentage": 50, 00:22:23.819 "status": "finished", 00:22:23.819 "queue_depth": 128, 00:22:23.819 "io_size": 4096, 00:22:23.819 "runtime": 1.005183, 00:22:23.819 "iops": 13605.482782737074, 00:22:23.819 "mibps": 53.146417120066694, 00:22:23.819 "io_failed": 0, 00:22:23.819 "io_timeout": 0, 00:22:23.819 "avg_latency_us": 9386.875592544338, 00:22:23.819 "min_latency_us": 3291.6945454545453, 00:22:23.819 "max_latency_us": 13941.294545454546 00:22:23.819 } 00:22:23.819 ], 00:22:23.819 "core_count": 1 00:22:23.819 } 00:22:23.819 08:33:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:23.819 08:33:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:24.079 08:33:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:22:24.079 08:33:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:24.079 08:33:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:24.079 08:33:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:24.079 08:33:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:24.079 08:33:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:24.338 08:33:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:24.338 08:33:26 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:22:24.338 08:33:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:24.338 08:33:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:24.338 08:33:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:24.339 08:33:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:24.339 08:33:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:24.598 08:33:26 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:22:24.598 08:33:26 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:24.598 08:33:26 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:24.598 08:33:26 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:24.598 08:33:26 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:24.598 08:33:26 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.598 08:33:26 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:24.598 08:33:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:24.598 08:33:26 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:24.598 08:33:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:24.857 [2024-10-15 08:33:26.550036] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:24.857 [2024-10-15 08:33:26.550696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33f30 (107): Transport endpoint is not connected 00:22:24.857 [2024-10-15 08:33:26.551684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc33f30 (9): Bad file descriptor 00:22:24.857 [2024-10-15 08:33:26.552680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:24.857 [2024-10-15 08:33:26.552715] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:24.857 [2024-10-15 08:33:26.552725] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:24.857 [2024-10-15 08:33:26.552736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
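This failure is the intended negative case: the controller is attached with key1, which is presumably not the PSK the target serves for this host and subsystem, so the TLS setup fails and the target drops the connection (the errno 107 messages above). The JSON-RPC request and the Input/output error response dumped next are that expected failure, and the NOT wrapper around the call converts the non-zero exit into a pass. A sketch of its contract, assuming the shape suggested by the es bookkeeping in the surrounding trace; the real helper in autotest_common.sh also inspects valid_exec_arg and signal exits, which this sketch glosses over:

NOT() {
	local es=0
	"$@" || es=$?
	# Pass only when the wrapped command actually failed, mirroring the
	# "local es=0" / "es=1" / "(( !es == 0 ))" lines visible in the trace.
	(( es != 0 ))
}

Usage mirroring the trace: NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1 succeeds precisely because the attach fails.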
00:22:24.857 request: 00:22:24.857 { 00:22:24.857 "name": "nvme0", 00:22:24.857 "trtype": "tcp", 00:22:24.857 "traddr": "127.0.0.1", 00:22:24.857 "adrfam": "ipv4", 00:22:24.857 "trsvcid": "4420", 00:22:24.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:24.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:24.857 "prchk_reftag": false, 00:22:24.857 "prchk_guard": false, 00:22:24.857 "hdgst": false, 00:22:24.857 "ddgst": false, 00:22:24.857 "psk": "key1", 00:22:24.857 "allow_unrecognized_csi": false, 00:22:24.857 "method": "bdev_nvme_attach_controller", 00:22:24.857 "req_id": 1 00:22:24.857 } 00:22:24.857 Got JSON-RPC error response 00:22:24.857 response: 00:22:24.857 { 00:22:24.857 "code": -5, 00:22:24.857 "message": "Input/output error" 00:22:24.857 } 00:22:24.857 08:33:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:24.857 08:33:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:24.857 08:33:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:24.857 08:33:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:24.857 08:33:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:22:24.857 08:33:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:24.857 08:33:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:24.857 08:33:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:24.858 08:33:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:24.858 08:33:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:25.117 08:33:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:25.117 08:33:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:22:25.117 08:33:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:25.117 08:33:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:25.117 08:33:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:25.117 08:33:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:25.117 08:33:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:25.375 08:33:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:22:25.375 08:33:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:22:25.375 08:33:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:25.634 08:33:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:22:25.634 08:33:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:25.893 08:33:27 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:22:25.893 08:33:27 keyring_file -- keyring/file.sh@78 -- # jq length 00:22:25.893 08:33:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:26.152 08:33:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:22:26.152 08:33:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.XqQMm3T7Nq 00:22:26.152 08:33:27 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqQMm3T7Nq 00:22:26.152 08:33:27 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:22:26.152 08:33:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqQMm3T7Nq 00:22:26.152 08:33:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:26.152 08:33:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.152 08:33:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:26.152 08:33:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.152 08:33:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqQMm3T7Nq 00:22:26.152 08:33:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XqQMm3T7Nq 00:22:26.415 [2024-10-15 08:33:27.979891] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XqQMm3T7Nq': 0100660 00:22:26.415 [2024-10-15 08:33:27.979945] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:26.415 request: 00:22:26.415 { 00:22:26.415 "name": "key0", 00:22:26.415 "path": "/tmp/tmp.XqQMm3T7Nq", 00:22:26.415 "method": "keyring_file_add_key", 00:22:26.415 "req_id": 1 00:22:26.415 } 00:22:26.415 Got JSON-RPC error response 00:22:26.415 response: 00:22:26.415 { 00:22:26.415 "code": -1, 00:22:26.415 "message": "Operation not permitted" 00:22:26.415 } 00:22:26.415 08:33:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:26.415 08:33:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:26.415 08:33:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:26.415 08:33:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:26.415 08:33:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.XqQMm3T7Nq 00:22:26.415 08:33:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XqQMm3T7Nq 00:22:26.415 08:33:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XqQMm3T7Nq 00:22:26.676 08:33:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.XqQMm3T7Nq 00:22:26.676 08:33:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:22:26.676 08:33:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:26.676 08:33:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:26.676 08:33:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:26.676 08:33:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:26.676 08:33:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:26.935 08:33:28 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:22:26.935 08:33:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:26.935 08:33:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:22:26.935 08:33:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:26.935 08:33:28 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:26.935 08:33:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.935 08:33:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:26.935 08:33:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:26.935 08:33:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:26.935 08:33:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:27.194 [2024-10-15 08:33:28.704051] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.XqQMm3T7Nq': No such file or directory 00:22:27.194 [2024-10-15 08:33:28.704099] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:27.194 [2024-10-15 08:33:28.704117] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:27.194 [2024-10-15 08:33:28.704128] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:22:27.194 [2024-10-15 08:33:28.704149] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:27.194 [2024-10-15 08:33:28.704158] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:27.194 request: 00:22:27.194 { 00:22:27.194 "name": "nvme0", 00:22:27.194 "trtype": "tcp", 00:22:27.194 "traddr": "127.0.0.1", 00:22:27.194 "adrfam": "ipv4", 00:22:27.194 "trsvcid": "4420", 00:22:27.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:27.194 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:27.194 "prchk_reftag": false, 00:22:27.194 "prchk_guard": false, 00:22:27.194 "hdgst": false, 00:22:27.194 "ddgst": false, 00:22:27.194 "psk": "key0", 00:22:27.194 "allow_unrecognized_csi": false, 00:22:27.194 "method": "bdev_nvme_attach_controller", 00:22:27.194 "req_id": 1 00:22:27.194 } 00:22:27.194 Got JSON-RPC error response 00:22:27.194 response: 00:22:27.194 { 00:22:27.194 "code": -19, 00:22:27.194 "message": "No such device" 00:22:27.194 } 00:22:27.194 08:33:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:22:27.194 08:33:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.194 08:33:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.194 08:33:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.194 08:33:28 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:22:27.194 08:33:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:27.454 08:33:28 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:27.454 08:33:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:27.454 08:33:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:27.454 08:33:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:27.454 
08:33:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:27.454 08:33:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:27.454 08:33:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3cWvJQtzmw 00:22:27.454 08:33:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:27.454 08:33:28 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:27.454 08:33:28 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:22:27.454 08:33:28 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:27.454 08:33:28 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:27.454 08:33:28 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:22:27.454 08:33:28 keyring_file -- nvmf/common.sh@731 -- # python - 00:22:27.454 08:33:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3cWvJQtzmw 00:22:27.454 08:33:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3cWvJQtzmw 00:22:27.454 08:33:28 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.3cWvJQtzmw 00:22:27.454 08:33:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3cWvJQtzmw 00:22:27.454 08:33:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3cWvJQtzmw 00:22:27.713 08:33:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:27.713 08:33:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:27.972 nvme0n1 00:22:27.972 08:33:29 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:22:27.972 08:33:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:27.972 08:33:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:27.972 08:33:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:27.972 08:33:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:27.972 08:33:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:28.231 08:33:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:22:28.231 08:33:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:22:28.231 08:33:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:28.490 08:33:30 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:22:28.490 08:33:30 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:22:28.490 08:33:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:28.490 08:33:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:28.490 08:33:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:28.749 08:33:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:22:28.749 08:33:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:22:28.749 08:33:30 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:22:28.749 08:33:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:28.749 08:33:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:28.749 08:33:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:28.749 08:33:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:29.008 08:33:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:22:29.008 08:33:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:29.008 08:33:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:29.266 08:33:30 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:22:29.266 08:33:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:29.266 08:33:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:22:29.525 08:33:31 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:22:29.525 08:33:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3cWvJQtzmw 00:22:29.525 08:33:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3cWvJQtzmw 00:22:29.784 08:33:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uDeSZz8h7S 00:22:29.784 08:33:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uDeSZz8h7S 00:22:30.043 08:33:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:30.043 08:33:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:30.301 nvme0n1 00:22:30.302 08:33:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:22:30.302 08:33:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:30.561 08:33:32 keyring_file -- keyring/file.sh@113 -- # config='{ 00:22:30.561 "subsystems": [ 00:22:30.561 { 00:22:30.561 "subsystem": "keyring", 00:22:30.561 "config": [ 00:22:30.561 { 00:22:30.561 "method": "keyring_file_add_key", 00:22:30.561 "params": { 00:22:30.561 "name": "key0", 00:22:30.561 "path": "/tmp/tmp.3cWvJQtzmw" 00:22:30.561 } 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "method": "keyring_file_add_key", 00:22:30.561 "params": { 00:22:30.561 "name": "key1", 00:22:30.561 "path": "/tmp/tmp.uDeSZz8h7S" 00:22:30.561 } 00:22:30.561 } 00:22:30.561 ] 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "subsystem": "iobuf", 00:22:30.561 "config": [ 00:22:30.561 { 00:22:30.561 "method": "iobuf_set_options", 00:22:30.561 "params": { 00:22:30.561 "small_pool_count": 8192, 00:22:30.561 "large_pool_count": 1024, 00:22:30.561 "small_bufsize": 8192, 00:22:30.561 "large_bufsize": 135168 00:22:30.561 } 00:22:30.561 } 00:22:30.561 ] 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "subsystem": "sock", 00:22:30.561 "config": [ 
00:22:30.561 { 00:22:30.561 "method": "sock_set_default_impl", 00:22:30.561 "params": { 00:22:30.561 "impl_name": "uring" 00:22:30.561 } 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "method": "sock_impl_set_options", 00:22:30.561 "params": { 00:22:30.561 "impl_name": "ssl", 00:22:30.561 "recv_buf_size": 4096, 00:22:30.561 "send_buf_size": 4096, 00:22:30.561 "enable_recv_pipe": true, 00:22:30.561 "enable_quickack": false, 00:22:30.561 "enable_placement_id": 0, 00:22:30.561 "enable_zerocopy_send_server": true, 00:22:30.561 "enable_zerocopy_send_client": false, 00:22:30.561 "zerocopy_threshold": 0, 00:22:30.561 "tls_version": 0, 00:22:30.561 "enable_ktls": false 00:22:30.561 } 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "method": "sock_impl_set_options", 00:22:30.561 "params": { 00:22:30.561 "impl_name": "posix", 00:22:30.561 "recv_buf_size": 2097152, 00:22:30.561 "send_buf_size": 2097152, 00:22:30.561 "enable_recv_pipe": true, 00:22:30.561 "enable_quickack": false, 00:22:30.561 "enable_placement_id": 0, 00:22:30.561 "enable_zerocopy_send_server": true, 00:22:30.561 "enable_zerocopy_send_client": false, 00:22:30.561 "zerocopy_threshold": 0, 00:22:30.561 "tls_version": 0, 00:22:30.561 "enable_ktls": false 00:22:30.561 } 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "method": "sock_impl_set_options", 00:22:30.561 "params": { 00:22:30.561 "impl_name": "uring", 00:22:30.561 "recv_buf_size": 2097152, 00:22:30.561 "send_buf_size": 2097152, 00:22:30.561 "enable_recv_pipe": true, 00:22:30.561 "enable_quickack": false, 00:22:30.561 "enable_placement_id": 0, 00:22:30.561 "enable_zerocopy_send_server": false, 00:22:30.561 "enable_zerocopy_send_client": false, 00:22:30.561 "zerocopy_threshold": 0, 00:22:30.561 "tls_version": 0, 00:22:30.561 "enable_ktls": false 00:22:30.561 } 00:22:30.561 } 00:22:30.561 ] 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "subsystem": "vmd", 00:22:30.561 "config": [] 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "subsystem": "accel", 00:22:30.561 "config": [ 00:22:30.561 { 00:22:30.561 "method": "accel_set_options", 00:22:30.561 "params": { 00:22:30.561 "small_cache_size": 128, 00:22:30.561 "large_cache_size": 16, 00:22:30.561 "task_count": 2048, 00:22:30.561 "sequence_count": 2048, 00:22:30.561 "buf_count": 2048 00:22:30.561 } 00:22:30.561 } 00:22:30.561 ] 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "subsystem": "bdev", 00:22:30.561 "config": [ 00:22:30.561 { 00:22:30.561 "method": "bdev_set_options", 00:22:30.561 "params": { 00:22:30.561 "bdev_io_pool_size": 65535, 00:22:30.561 "bdev_io_cache_size": 256, 00:22:30.561 "bdev_auto_examine": true, 00:22:30.561 "iobuf_small_cache_size": 128, 00:22:30.561 "iobuf_large_cache_size": 16 00:22:30.561 } 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "method": "bdev_raid_set_options", 00:22:30.561 "params": { 00:22:30.561 "process_window_size_kb": 1024, 00:22:30.561 "process_max_bandwidth_mb_sec": 0 00:22:30.561 } 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "method": "bdev_iscsi_set_options", 00:22:30.561 "params": { 00:22:30.561 "timeout_sec": 30 00:22:30.561 } 00:22:30.561 }, 00:22:30.561 { 00:22:30.561 "method": "bdev_nvme_set_options", 00:22:30.561 "params": { 00:22:30.561 "action_on_timeout": "none", 00:22:30.561 "timeout_us": 0, 00:22:30.561 "timeout_admin_us": 0, 00:22:30.561 "keep_alive_timeout_ms": 10000, 00:22:30.561 "arbitration_burst": 0, 00:22:30.561 "low_priority_weight": 0, 00:22:30.561 "medium_priority_weight": 0, 00:22:30.561 "high_priority_weight": 0, 00:22:30.561 "nvme_adminq_poll_period_us": 10000, 00:22:30.561 
"nvme_ioq_poll_period_us": 0, 00:22:30.561 "io_queue_requests": 512, 00:22:30.561 "delay_cmd_submit": true, 00:22:30.561 "transport_retry_count": 4, 00:22:30.561 "bdev_retry_count": 3, 00:22:30.561 "transport_ack_timeout": 0, 00:22:30.561 "ctrlr_loss_timeout_sec": 0, 00:22:30.561 "reconnect_delay_sec": 0, 00:22:30.561 "fast_io_fail_timeout_sec": 0, 00:22:30.561 "disable_auto_failback": false, 00:22:30.561 "generate_uuids": false, 00:22:30.561 "transport_tos": 0, 00:22:30.561 "nvme_error_stat": false, 00:22:30.561 "rdma_srq_size": 0, 00:22:30.561 "io_path_stat": false, 00:22:30.561 "allow_accel_sequence": false, 00:22:30.561 "rdma_max_cq_size": 0, 00:22:30.561 "rdma_cm_event_timeout_ms": 0, 00:22:30.561 "dhchap_digests": [ 00:22:30.561 "sha256", 00:22:30.561 "sha384", 00:22:30.561 "sha512" 00:22:30.561 ], 00:22:30.561 "dhchap_dhgroups": [ 00:22:30.561 "null", 00:22:30.561 "ffdhe2048", 00:22:30.562 "ffdhe3072", 00:22:30.562 "ffdhe4096", 00:22:30.562 "ffdhe6144", 00:22:30.562 "ffdhe8192" 00:22:30.562 ] 00:22:30.562 } 00:22:30.562 }, 00:22:30.562 { 00:22:30.562 "method": "bdev_nvme_attach_controller", 00:22:30.562 "params": { 00:22:30.562 "name": "nvme0", 00:22:30.562 "trtype": "TCP", 00:22:30.562 "adrfam": "IPv4", 00:22:30.562 "traddr": "127.0.0.1", 00:22:30.562 "trsvcid": "4420", 00:22:30.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:30.562 "prchk_reftag": false, 00:22:30.562 "prchk_guard": false, 00:22:30.562 "ctrlr_loss_timeout_sec": 0, 00:22:30.562 "reconnect_delay_sec": 0, 00:22:30.562 "fast_io_fail_timeout_sec": 0, 00:22:30.562 "psk": "key0", 00:22:30.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:30.562 "hdgst": false, 00:22:30.562 "ddgst": false, 00:22:30.562 "multipath": "multipath" 00:22:30.562 } 00:22:30.562 }, 00:22:30.562 { 00:22:30.562 "method": "bdev_nvme_set_hotplug", 00:22:30.562 "params": { 00:22:30.562 "period_us": 100000, 00:22:30.562 "enable": false 00:22:30.562 } 00:22:30.562 }, 00:22:30.562 { 00:22:30.562 "method": "bdev_wait_for_examine" 00:22:30.562 } 00:22:30.562 ] 00:22:30.562 }, 00:22:30.562 { 00:22:30.562 "subsystem": "nbd", 00:22:30.562 "config": [] 00:22:30.562 } 00:22:30.562 ] 00:22:30.562 }' 00:22:30.562 08:33:32 keyring_file -- keyring/file.sh@115 -- # killprocess 85620 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85620 ']' 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85620 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@955 -- # uname 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85620 00:22:30.562 killing process with pid 85620 00:22:30.562 Received shutdown signal, test time was about 1.000000 seconds 00:22:30.562 00:22:30.562 Latency(us) 00:22:30.562 [2024-10-15T08:33:32.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.562 [2024-10-15T08:33:32.293Z] =================================================================================================================== 00:22:30.562 [2024-10-15T08:33:32.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85620' 00:22:30.562 08:33:32 keyring_file -- 
common/autotest_common.sh@969 -- # kill 85620 00:22:30.562 08:33:32 keyring_file -- common/autotest_common.sh@974 -- # wait 85620 00:22:30.821 08:33:32 keyring_file -- keyring/file.sh@118 -- # bperfpid=85853 00:22:30.821 08:33:32 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:30.821 08:33:32 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85853 /var/tmp/bperf.sock 00:22:30.821 08:33:32 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 85853 ']' 00:22:30.821 08:33:32 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:30.821 08:33:32 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.821 08:33:32 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:22:30.821 "subsystems": [ 00:22:30.821 { 00:22:30.821 "subsystem": "keyring", 00:22:30.821 "config": [ 00:22:30.821 { 00:22:30.821 "method": "keyring_file_add_key", 00:22:30.821 "params": { 00:22:30.821 "name": "key0", 00:22:30.821 "path": "/tmp/tmp.3cWvJQtzmw" 00:22:30.821 } 00:22:30.821 }, 00:22:30.821 { 00:22:30.821 "method": "keyring_file_add_key", 00:22:30.821 "params": { 00:22:30.821 "name": "key1", 00:22:30.821 "path": "/tmp/tmp.uDeSZz8h7S" 00:22:30.821 } 00:22:30.821 } 00:22:30.821 ] 00:22:30.821 }, 00:22:30.821 { 00:22:30.821 "subsystem": "iobuf", 00:22:30.821 "config": [ 00:22:30.821 { 00:22:30.821 "method": "iobuf_set_options", 00:22:30.821 "params": { 00:22:30.821 "small_pool_count": 8192, 00:22:30.821 "large_pool_count": 1024, 00:22:30.821 "small_bufsize": 8192, 00:22:30.821 "large_bufsize": 135168 00:22:30.821 } 00:22:30.821 } 00:22:30.821 ] 00:22:30.821 }, 00:22:30.821 { 00:22:30.821 "subsystem": "sock", 00:22:30.821 "config": [ 00:22:30.821 { 00:22:30.821 "method": "sock_set_default_impl", 00:22:30.821 "params": { 00:22:30.821 "impl_name": "uring" 00:22:30.821 } 00:22:30.821 }, 00:22:30.821 { 00:22:30.821 "method": "sock_impl_set_options", 00:22:30.821 "params": { 00:22:30.821 "impl_name": "ssl", 00:22:30.821 "recv_buf_size": 4096, 00:22:30.821 "send_buf_size": 4096, 00:22:30.821 "enable_recv_pipe": true, 00:22:30.821 "enable_quickack": false, 00:22:30.821 "enable_placement_id": 0, 00:22:30.821 "enable_zerocopy_send_server": true, 00:22:30.821 "enable_zerocopy_send_client": false, 00:22:30.821 "zerocopy_threshold": 0, 00:22:30.821 "tls_version": 0, 00:22:30.821 "enable_ktls": false 00:22:30.821 } 00:22:30.821 }, 00:22:30.821 { 00:22:30.821 "method": "sock_impl_set_options", 00:22:30.821 "params": { 00:22:30.821 "impl_name": "posix", 00:22:30.821 "recv_buf_size": 2097152, 00:22:30.822 "send_buf_size": 2097152, 00:22:30.822 "enable_recv_pipe": true, 00:22:30.822 "enable_quickack": false, 00:22:30.822 "enable_placement_id": 0, 00:22:30.822 "enable_zerocopy_send_server": true, 00:22:30.822 "enable_zerocopy_send_client": false, 00:22:30.822 "zerocopy_threshold": 0, 00:22:30.822 "tls_version": 0, 00:22:30.822 "enable_ktls": false 00:22:30.822 } 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "method": "sock_impl_set_options", 00:22:30.822 "params": { 00:22:30.822 "impl_name": "uring", 00:22:30.822 "recv_buf_size": 2097152, 00:22:30.822 "send_buf_size": 2097152, 00:22:30.822 "enable_recv_pipe": true, 00:22:30.822 "enable_quickack": false, 00:22:30.822 "enable_placement_id": 0, 00:22:30.822 "enable_zerocopy_send_server": false, 00:22:30.822 "enable_zerocopy_send_client": false, 00:22:30.822 "zerocopy_threshold": 0, 00:22:30.822 
"tls_version": 0, 00:22:30.822 "enable_ktls": false 00:22:30.822 } 00:22:30.822 } 00:22:30.822 ] 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "subsystem": "vmd", 00:22:30.822 "config": [] 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "subsystem": "accel", 00:22:30.822 "config": [ 00:22:30.822 { 00:22:30.822 "method": "accel_set_options", 00:22:30.822 "params": { 00:22:30.822 "small_cache_size": 128, 00:22:30.822 "large_cache_size": 16, 00:22:30.822 "task_count": 2048, 00:22:30.822 "sequence_count": 2048, 00:22:30.822 "buf_count": 2048 00:22:30.822 } 00:22:30.822 } 00:22:30.822 ] 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "subsystem": "bdev", 00:22:30.822 "config": [ 00:22:30.822 { 00:22:30.822 "method": "bdev_set_options", 00:22:30.822 "params": { 00:22:30.822 "bdev_io_pool_size": 65535, 00:22:30.822 "bdev_io_cache_size": 256, 00:22:30.822 "bdev_auto_examine": true, 00:22:30.822 "iobuf_small_cache_size": 128, 00:22:30.822 "iobuf_large_cache_size": 16 00:22:30.822 } 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "method": "bdev_raid_set_options", 00:22:30.822 "params": { 00:22:30.822 "process_window_size_kb": 1024, 00:22:30.822 "process_max_bandwidth_mb_sec": 0 00:22:30.822 } 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "method": "bdev_iscsi_set_options", 00:22:30.822 "params": { 00:22:30.822 "timeout_sec": 30 00:22:30.822 } 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "method": "bdev_nvme_set_options", 00:22:30.822 "params": { 00:22:30.822 "action_on_timeout": "none", 00:22:30.822 "timeout_us": 0, 00:22:30.822 "timeout_admin_us": 0, 00:22:30.822 "keep_alive_timeout_ms": 10000, 00:22:30.822 "arbitration_burst": 0, 00:22:30.822 "low_priority_weight": 0, 00:22:30.822 "medium_priority_weight": 0, 00:22:30.822 "high_priority_weight": 0, 00:22:30.822 "nvme_adminq_poll_period_us": 10000, 00:22:30.822 "nvme_ioq_poll_period_us": 0, 00:22:30.822 "io_queue_requests": 512, 00:22:30.822 "delay_cmd_submit": true, 00:22:30.822 "transport_retry_count": 4, 00:22:30.822 "bdev_retry_count": 3, 00:22:30.822 "transport_ack_timeout": 0, 00:22:30.822 "ctrlr_loss_timeout_sec": 0, 00:22:30.822 "reconnect_delay_sec": 0, 00:22:30.822 "fast_io_fail_timeout_sec": 0, 00:22:30.822 "disable_auto_failback": false, 00:22:30.822 "generate_uuids": false, 00:22:30.822 "transport_tos": 0, 00:22:30.822 "nvme_error_stat": false, 00:22:30.822 "rdma_srq_size": 0, 00:22:30.822 "io_path_stat": false, 00:22:30.822 "allow_accel_sequence": false, 00:22:30.822 "rdma_max_cq_size": 0, 00:22:30.822 "rdma_cm_event_timeout_ms": 0, 00:22:30.822 "dhchap_digests": [ 00:22:30.822 "sha256", 00:22:30.822 "sha384", 00:22:30.822 "sha512" 00:22:30.822 ], 00:22:30.822 "dhchap_dhgroups": [ 00:22:30.822 "null", 00:22:30.822 "ffdhe2048", 00:22:30.822 "ffdhe3072", 00:22:30.822 "ffdhe4096", 00:22:30.822 "ffdhe6144", 00:22:30.822 "ffdhe8192" 00:22:30.822 ] 00:22:30.822 } 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "method": "bdev_nvme_attach_controller", 00:22:30.822 "params": { 00:22:30.822 "name": "nvme0", 00:22:30.822 "trtype": "TCP", 00:22:30.822 "adrfam": "IPv4", 00:22:30.822 "traddr": "127.0.0.1", 00:22:30.822 "trsvcid": "4420", 00:22:30.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:30.822 "prchk_reftag": false, 00:22:30.822 "prchk_guard": false, 00:22:30.822 "ctrlr_loss_timeout_sec": 0, 00:22:30.822 "reconnect_delay_sec": 0, 00:22:30.822 "fast_io_fail_timeout_sec": 0, 00:22:30.822 "psk": "key0", 00:22:30.822 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:30.822 "hdgst": false, 00:22:30.822 "ddgst": false, 00:22:30.822 "multipath": "multipath" 
00:22:30.822 } 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "method": "bdev_nvme_set_hotplug", 00:22:30.822 "params": { 00:22:30.822 "period_us": 100000, 00:22:30.822 "enable": false 00:22:30.822 } 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "method": "bdev_wait_for_examine" 00:22:30.822 } 00:22:30.822 ] 00:22:30.822 }, 00:22:30.822 { 00:22:30.822 "subsystem": "nbd", 00:22:30.822 "config": [] 00:22:30.822 } 00:22:30.822 ] 00:22:30.822 }' 00:22:30.822 08:33:32 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:30.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:30.822 08:33:32 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.822 08:33:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:30.822 [2024-10-15 08:33:32.511295] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 00:22:30.822 [2024-10-15 08:33:32.511406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85853 ] 00:22:31.081 [2024-10-15 08:33:32.641395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.081 [2024-10-15 08:33:32.700530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.340 [2024-10-15 08:33:32.853059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:31.340 [2024-10-15 08:33:32.917477] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.907 08:33:33 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.907 08:33:33 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:22:31.907 08:33:33 keyring_file -- keyring/file.sh@121 -- # jq length 00:22:31.907 08:33:33 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:22:31.907 08:33:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:32.166 08:33:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:32.166 08:33:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:22:32.166 08:33:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:32.166 08:33:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:32.166 08:33:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:32.166 08:33:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:32.166 08:33:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:32.424 08:33:34 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:22:32.424 08:33:34 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:22:32.424 08:33:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:32.424 08:33:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:32.424 08:33:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:32.424 08:33:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:32.424 08:33:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:22:32.683 08:33:34 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:22:32.683 08:33:34 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:22:32.683 08:33:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:32.683 08:33:34 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:22:32.942 08:33:34 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:22:32.942 08:33:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:32.942 08:33:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.3cWvJQtzmw /tmp/tmp.uDeSZz8h7S 00:22:32.942 08:33:34 keyring_file -- keyring/file.sh@20 -- # killprocess 85853 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85853 ']' 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85853 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@955 -- # uname 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85853 00:22:32.942 killing process with pid 85853 00:22:32.942 Received shutdown signal, test time was about 1.000000 seconds 00:22:32.942 00:22:32.942 Latency(us) 00:22:32.942 [2024-10-15T08:33:34.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.942 [2024-10-15T08:33:34.673Z] =================================================================================================================== 00:22:32.942 [2024-10-15T08:33:34.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85853' 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@969 -- # kill 85853 00:22:32.942 08:33:34 keyring_file -- common/autotest_common.sh@974 -- # wait 85853 00:22:33.200 08:33:34 keyring_file -- keyring/file.sh@21 -- # killprocess 85606 00:22:33.200 08:33:34 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 85606 ']' 00:22:33.200 08:33:34 keyring_file -- common/autotest_common.sh@954 -- # kill -0 85606 00:22:33.200 08:33:34 keyring_file -- common/autotest_common.sh@955 -- # uname 00:22:33.200 08:33:34 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.200 08:33:34 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85606 00:22:33.458 killing process with pid 85606 00:22:33.458 08:33:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.458 08:33:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.458 08:33:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85606' 00:22:33.458 08:33:34 keyring_file -- common/autotest_common.sh@969 -- # kill 85606 00:22:33.458 08:33:34 keyring_file -- common/autotest_common.sh@974 -- # wait 85606 00:22:34.027 00:22:34.027 real 0m15.008s 00:22:34.027 user 0m37.392s 00:22:34.027 sys 0m3.070s 00:22:34.027 08:33:35 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:34.027 08:33:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:34.027 
************************************ 00:22:34.027 END TEST keyring_file 00:22:34.027 ************************************ 00:22:34.027 08:33:35 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:22:34.027 08:33:35 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:34.027 08:33:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:34.027 08:33:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:34.027 08:33:35 -- common/autotest_common.sh@10 -- # set +x 00:22:34.027 ************************************ 00:22:34.027 START TEST keyring_linux 00:22:34.027 ************************************ 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:34.027 Joined session keyring: 1070705616 00:22:34.027 * Looking for test storage... 00:22:34.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@345 -- # : 1 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@368 -- # return 0 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:34.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.027 --rc genhtml_branch_coverage=1 00:22:34.027 --rc genhtml_function_coverage=1 00:22:34.027 --rc genhtml_legend=1 00:22:34.027 --rc geninfo_all_blocks=1 00:22:34.027 --rc geninfo_unexecuted_blocks=1 00:22:34.027 00:22:34.027 ' 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:34.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.027 --rc genhtml_branch_coverage=1 00:22:34.027 --rc genhtml_function_coverage=1 00:22:34.027 --rc genhtml_legend=1 00:22:34.027 --rc geninfo_all_blocks=1 00:22:34.027 --rc geninfo_unexecuted_blocks=1 00:22:34.027 00:22:34.027 ' 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:34.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.027 --rc genhtml_branch_coverage=1 00:22:34.027 --rc genhtml_function_coverage=1 00:22:34.027 --rc genhtml_legend=1 00:22:34.027 --rc geninfo_all_blocks=1 00:22:34.027 --rc geninfo_unexecuted_blocks=1 00:22:34.027 00:22:34.027 ' 00:22:34.027 08:33:35 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:34.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.027 --rc genhtml_branch_coverage=1 00:22:34.027 --rc genhtml_function_coverage=1 00:22:34.027 --rc genhtml_legend=1 00:22:34.027 --rc geninfo_all_blocks=1 00:22:34.027 --rc geninfo_unexecuted_blocks=1 00:22:34.027 00:22:34.027 ' 00:22:34.027 08:33:35 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:34.027 08:33:35 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.027 08:33:35 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a506a3e3-6ffe-4288-9319-5f3dadc1f0c7 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.027 08:33:35 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.027 08:33:35 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.027 08:33:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.027 08:33:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.027 08:33:35 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.027 08:33:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:34.028 08:33:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@51 -- # : 0 
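The NVME_HOSTNQN exported above comes from `nvme gen-hostnqn`, which emits a UUID-based NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>. A minimal Python sketch of the same format (the uuid call here is only an illustration; the test itself uses the nvme-cli tool):

    # Sketch: build a UUID-based host NQN equivalent to `nvme gen-hostnqn`.
    import uuid

    def gen_hostnqn() -> str:
        return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

    print(gen_hostnqn())  # e.g. nqn.2014-08.org.nvmexpress:uuid:a506a3e3-...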
00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.028 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.028 08:33:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:34.028 08:33:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:34.028 08:33:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:34.028 08:33:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:34.028 08:33:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:34.028 08:33:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:34.028 08:33:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:34.028 08:33:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:34.028 08:33:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:34.028 08:33:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:34.028 08:33:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:34.028 08:33:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:34.028 08:33:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:22:34.028 08:33:35 keyring_linux -- nvmf/common.sh@731 -- # python - 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:34.286 /tmp/:spdk-test:key0 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:34.286 08:33:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:22:34.286 08:33:35 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:34.286 08:33:35 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:22:34.286 08:33:35 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:22:34.286 08:33:35 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:22:34.286 08:33:35 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:22:34.286 08:33:35 keyring_linux -- nvmf/common.sh@731 -- # python - 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:34.286 /tmp/:spdk-test:key1 00:22:34.286 08:33:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:34.286 08:33:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85986 00:22:34.286 08:33:35 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:34.286 08:33:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85986 00:22:34.286 08:33:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 85986 ']' 00:22:34.286 08:33:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.286 08:33:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.286 08:33:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.286 08:33:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.286 08:33:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:34.286 [2024-10-15 08:33:35.870936] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
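The /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 files prepared above hold TLS PSKs in the NVMe interchange text form NVMeTLSkey-1:<hh>:<base64>:, where <hh> is the digest selector (00 here, i.e. the configured key is used as-is) and the base64 payload is the key string's bytes followed by a CRC-32 of those bytes. A sketch of that construction, assuming the CRC is appended little-endian as format_key's inline Python appears to do:

    # Sketch of the PSK interchange string written by prep_key/format_interchange_psk.
    # Assumption: CRC-32 of the key bytes is appended little-endian before base64;
    # digest 00 means the configured key is used directly (no hash).
    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int = 0) -> str:
        data = key.encode()                                    # e.g. b"00112233...eeff"
        crc = zlib.crc32(data).to_bytes(4, byteorder="little")
        return f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(data + crc).decode()}:"

    # Should reproduce the NVMeTLSkey-1:00:MDAxMTIy... string loaded into the
    # session keyring below, if the byte-order assumption holds.
    print(format_interchange_psk("00112233445566778899aabbccddeeff"))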
00:22:34.286 [2024-10-15 08:33:35.871049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85986 ] 00:22:34.286 [2024-10-15 08:33:36.007335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.545 [2024-10-15 08:33:36.061173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.545 [2024-10-15 08:33:36.151258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:34.803 08:33:36 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.803 08:33:36 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:34.803 08:33:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:34.803 08:33:36 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.803 08:33:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:34.803 [2024-10-15 08:33:36.394984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.803 null0 00:22:34.803 [2024-10-15 08:33:36.426953] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:34.803 [2024-10-15 08:33:36.427198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:34.803 08:33:36 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.803 08:33:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:34.803 840650937 00:22:34.803 08:33:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:34.803 733783562 00:22:34.803 08:33:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85995 00:22:34.803 08:33:36 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:34.803 08:33:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85995 /var/tmp/bperf.sock 00:22:34.803 08:33:36 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 85995 ']' 00:22:34.803 08:33:36 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:34.803 08:33:36 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:34.804 08:33:36 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:34.804 08:33:36 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.804 08:33:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:34.804 [2024-10-15 08:33:36.511826] Starting SPDK v25.01-pre git sha1 30f8ce7c5 / DPDK 24.03.0 initialization... 
00:22:34.804 [2024-10-15 08:33:36.511934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85995 ] 00:22:35.061 [2024-10-15 08:33:36.648749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.061 [2024-10-15 08:33:36.701835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.061 08:33:36 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.061 08:33:36 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:35.061 08:33:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:35.061 08:33:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:35.628 08:33:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:35.628 08:33:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:35.887 [2024-10-15 08:33:37.388874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:35.887 08:33:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:35.887 08:33:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:36.146 [2024-10-15 08:33:37.680775] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.146 nvme0n1 00:22:36.146 08:33:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:36.146 08:33:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:36.146 08:33:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:36.146 08:33:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:36.146 08:33:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:36.146 08:33:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:36.404 08:33:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:36.404 08:33:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:36.404 08:33:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:36.404 08:33:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:36.404 08:33:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:36.404 08:33:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:36.404 08:33:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:36.671 08:33:38 keyring_linux -- keyring/linux.sh@25 -- # sn=840650937 00:22:36.671 08:33:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:36.671 08:33:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
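The check that continues below ties the SPDK-side key back to the kernel session keyring: the serial reported by bperf's keyring_get_keys must match what `keyctl search @s user :spdk-test:key0` returns, and `keyctl print` of that serial must echo the interchange string. A Python sketch of the same lookup and verification, assuming the keyutils `keyctl` CLI is on PATH (as it is in this environment):

    # Sketch of the keyring verification in linux.sh: resolve the key name to its
    # serial in the session keyring, then read back its payload.
    import subprocess

    def get_keysn(name: str) -> int:
        out = subprocess.run(["keyctl", "search", "@s", "user", name],
                             check=True, capture_output=True, text=True)
        return int(out.stdout.strip())

    def key_payload(sn: int) -> str:
        out = subprocess.run(["keyctl", "print", str(sn)],
                             check=True, capture_output=True, text=True)
        return out.stdout.strip()

    sn = get_keysn(":spdk-test:key0")            # 840650937 in the run above
    assert key_payload(sn).startswith("NVMeTLSkey-1:00:")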
00:22:36.671 08:33:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 840650937 == \8\4\0\6\5\0\9\3\7 ]] 00:22:36.671 08:33:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 840650937 00:22:36.671 08:33:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:36.671 08:33:38 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:36.948 Running I/O for 1 seconds... 00:22:37.891 15621.00 IOPS, 61.02 MiB/s 00:22:37.891 Latency(us) 00:22:37.891 [2024-10-15T08:33:39.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.891 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:37.891 nvme0n1 : 1.01 15625.68 61.04 0.00 0.00 8153.57 6523.81 17515.99 00:22:37.891 [2024-10-15T08:33:39.622Z] =================================================================================================================== 00:22:37.891 [2024-10-15T08:33:39.622Z] Total : 15625.68 61.04 0.00 0.00 8153.57 6523.81 17515.99 00:22:37.891 { 00:22:37.891 "results": [ 00:22:37.891 { 00:22:37.891 "job": "nvme0n1", 00:22:37.891 "core_mask": "0x2", 00:22:37.891 "workload": "randread", 00:22:37.891 "status": "finished", 00:22:37.891 "queue_depth": 128, 00:22:37.891 "io_size": 4096, 00:22:37.891 "runtime": 1.00802, 00:22:37.891 "iops": 15625.68203011845, 00:22:37.891 "mibps": 61.0378204301502, 00:22:37.891 "io_failed": 0, 00:22:37.891 "io_timeout": 0, 00:22:37.891 "avg_latency_us": 8153.569784313839, 00:22:37.891 "min_latency_us": 6523.810909090909, 00:22:37.891 "max_latency_us": 17515.985454545455 00:22:37.891 } 00:22:37.891 ], 00:22:37.891 "core_count": 1 00:22:37.891 } 00:22:37.891 08:33:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:37.891 08:33:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:38.150 08:33:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:38.150 08:33:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:38.150 08:33:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:38.150 08:33:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:38.150 08:33:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:38.150 08:33:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:38.408 08:33:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:38.408 08:33:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:38.408 08:33:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:38.408 08:33:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:38.408 08:33:40 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:22:38.408 08:33:40 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:38.408 
08:33:40 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:38.408 08:33:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.408 08:33:40 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:38.408 08:33:40 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.408 08:33:40 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:38.408 08:33:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:38.667 [2024-10-15 08:33:40.319872] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:38.667 [2024-10-15 08:33:40.320232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2c590 (107): Transport endpoint is not connected 00:22:38.667 [2024-10-15 08:33:40.321207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2c590 (9): Bad file descriptor 00:22:38.667 [2024-10-15 08:33:40.322204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.667 [2024-10-15 08:33:40.322249] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:38.667 [2024-10-15 08:33:40.322259] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:38.667 [2024-10-15 08:33:40.322270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
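The failure traced above is the intended negative path: after nvme0 is detached, re-attaching with --psk :spdk-test:key1 (a PSK the target was never configured with) must make bdev_nvme_attach_controller return a JSON-RPC error, as shown in the code -5 / Input/output error dump that follows, and the NOT wrapper turns that non-zero exit into a pass. A sketch of the same check from Python; the rpc.py flags mirror the call traced above and are not additions:

    # Sketch of the negative-path check: attaching with the wrong PSK is expected
    # to fail, so a non-zero exit from rpc.py counts as success here.
    import subprocess

    cmd = [
        "/home/vagrant/spdk_repo/spdk/scripts/rpc.py", "-s", "/var/tmp/bperf.sock",
        "bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp",
        "-a", "127.0.0.1", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode0", "-q", "nqn.2016-06.io.spdk:host0",
        "--psk", ":spdk-test:key1",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    assert result.returncode != 0, "attach with the wrong PSK should be rejected"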
00:22:38.667 request: 00:22:38.667 { 00:22:38.667 "name": "nvme0", 00:22:38.667 "trtype": "tcp", 00:22:38.667 "traddr": "127.0.0.1", 00:22:38.667 "adrfam": "ipv4", 00:22:38.667 "trsvcid": "4420", 00:22:38.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:38.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:38.667 "prchk_reftag": false, 00:22:38.667 "prchk_guard": false, 00:22:38.667 "hdgst": false, 00:22:38.667 "ddgst": false, 00:22:38.667 "psk": ":spdk-test:key1", 00:22:38.667 "allow_unrecognized_csi": false, 00:22:38.667 "method": "bdev_nvme_attach_controller", 00:22:38.667 "req_id": 1 00:22:38.667 } 00:22:38.667 Got JSON-RPC error response 00:22:38.667 response: 00:22:38.667 { 00:22:38.667 "code": -5, 00:22:38.667 "message": "Input/output error" 00:22:38.667 } 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@33 -- # sn=840650937 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 840650937 00:22:38.667 1 links removed 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@33 -- # sn=733783562 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 733783562 00:22:38.667 1 links removed 00:22:38.667 08:33:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85995 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 85995 ']' 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 85995 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85995 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:38.667 killing process with pid 85995 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85995' 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 85995 00:22:38.667 Received shutdown signal, test time was about 1.000000 seconds 00:22:38.667 00:22:38.667 Latency(us) 
00:22:38.667 [2024-10-15T08:33:40.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.667 [2024-10-15T08:33:40.398Z] =================================================================================================================== 00:22:38.667 [2024-10-15T08:33:40.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:38.667 08:33:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 85995 00:22:38.926 08:33:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85986 00:22:38.926 08:33:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 85986 ']' 00:22:38.926 08:33:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 85986 00:22:38.927 08:33:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:38.927 08:33:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.927 08:33:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85986 00:22:39.186 08:33:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.186 08:33:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.186 killing process with pid 85986 00:22:39.186 08:33:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85986' 00:22:39.186 08:33:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 85986 00:22:39.186 08:33:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 85986 00:22:39.445 00:22:39.445 real 0m5.664s 00:22:39.445 user 0m10.877s 00:22:39.445 sys 0m1.683s 00:22:39.445 08:33:41 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.445 ************************************ 00:22:39.445 END TEST keyring_linux 00:22:39.445 08:33:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:39.445 ************************************ 00:22:39.704 08:33:41 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:39.704 08:33:41 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:39.704 08:33:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:39.704 08:33:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:39.704 08:33:41 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:39.704 08:33:41 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:22:39.704 08:33:41 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:22:39.704 08:33:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.704 08:33:41 -- common/autotest_common.sh@10 -- # set +x 00:22:39.704 08:33:41 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:22:39.704 08:33:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:39.704 08:33:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:39.704 08:33:41 -- common/autotest_common.sh@10 -- # set +x 00:22:41.610 INFO: APP EXITING 00:22:41.610 INFO: killing all VMs 
00:22:41.610 INFO: killing vhost app 00:22:41.610 INFO: EXIT DONE 00:22:42.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:42.178 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:42.178 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:43.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:43.116 Cleaning 00:22:43.116 Removing: /var/run/dpdk/spdk0/config 00:22:43.116 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:43.116 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:43.116 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:43.116 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:43.116 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:43.116 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:43.116 Removing: /var/run/dpdk/spdk1/config 00:22:43.116 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:43.116 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:43.116 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:43.116 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:43.116 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:43.116 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:43.116 Removing: /var/run/dpdk/spdk2/config 00:22:43.116 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:43.116 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:43.116 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:43.116 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:43.116 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:43.116 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:43.116 Removing: /var/run/dpdk/spdk3/config 00:22:43.116 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:43.116 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:43.116 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:43.116 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:43.116 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:43.116 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:43.116 Removing: /var/run/dpdk/spdk4/config 00:22:43.116 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:43.116 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:43.116 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:43.116 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:43.116 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:43.116 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:43.116 Removing: /dev/shm/nvmf_trace.0 00:22:43.116 Removing: /dev/shm/spdk_tgt_trace.pid56877 00:22:43.116 Removing: /var/run/dpdk/spdk0 00:22:43.116 Removing: /var/run/dpdk/spdk1 00:22:43.116 Removing: /var/run/dpdk/spdk2 00:22:43.116 Removing: /var/run/dpdk/spdk3 00:22:43.116 Removing: /var/run/dpdk/spdk4 00:22:43.116 Removing: /var/run/dpdk/spdk_pid56719 00:22:43.116 Removing: /var/run/dpdk/spdk_pid56877 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57089 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57175 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57203 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57318 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57336 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57470 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57676 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57833 00:22:43.116 Removing: /var/run/dpdk/spdk_pid57918 00:22:43.116 
Removing: /var/run/dpdk/spdk_pid58002 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58099 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58171 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58215 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58245 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58320 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58412 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58867 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58912 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58963 00:22:43.116 Removing: /var/run/dpdk/spdk_pid58979 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59051 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59067 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59140 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59156 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59207 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59224 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59265 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59281 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59417 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59447 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59535 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59869 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59881 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59915 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59933 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59954 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59973 00:22:43.116 Removing: /var/run/dpdk/spdk_pid59992 00:22:43.116 Removing: /var/run/dpdk/spdk_pid60012 00:22:43.116 Removing: /var/run/dpdk/spdk_pid60032 00:22:43.116 Removing: /var/run/dpdk/spdk_pid60047 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60062 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60087 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60100 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60121 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60140 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60159 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60175 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60199 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60213 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60228 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60264 00:22:43.117 Removing: /var/run/dpdk/spdk_pid60283 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60313 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60385 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60419 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60428 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60457 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60472 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60478 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60522 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60535 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60568 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60579 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60593 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60598 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60613 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60623 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60632 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60647 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60680 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60702 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60717 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60751 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60755 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60768 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60814 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60820 00:22:43.376 Removing: 
/var/run/dpdk/spdk_pid60853 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60866 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60868 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60881 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60894 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60896 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60909 00:22:43.376 Removing: /var/run/dpdk/spdk_pid60922 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61006 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61060 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61179 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61217 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61262 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61277 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61299 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61319 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61356 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61371 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61455 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61471 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61526 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61593 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61665 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61694 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61799 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61841 00:22:43.376 Removing: /var/run/dpdk/spdk_pid61879 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62111 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62210 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62244 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62268 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62307 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62340 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62374 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62411 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62805 00:22:43.376 Removing: /var/run/dpdk/spdk_pid62845 00:22:43.376 Removing: /var/run/dpdk/spdk_pid63185 00:22:43.376 Removing: /var/run/dpdk/spdk_pid63667 00:22:43.376 Removing: /var/run/dpdk/spdk_pid63947 00:22:43.376 Removing: /var/run/dpdk/spdk_pid64856 00:22:43.376 Removing: /var/run/dpdk/spdk_pid65788 00:22:43.376 Removing: /var/run/dpdk/spdk_pid65905 00:22:43.376 Removing: /var/run/dpdk/spdk_pid65973 00:22:43.376 Removing: /var/run/dpdk/spdk_pid67389 00:22:43.376 Removing: /var/run/dpdk/spdk_pid67713 00:22:43.376 Removing: /var/run/dpdk/spdk_pid71547 00:22:43.376 Removing: /var/run/dpdk/spdk_pid71931 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72040 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72174 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72198 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72225 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72259 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72344 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72484 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72647 00:22:43.376 Removing: /var/run/dpdk/spdk_pid72734 00:22:43.636 Removing: /var/run/dpdk/spdk_pid72922 00:22:43.636 Removing: /var/run/dpdk/spdk_pid72996 00:22:43.636 Removing: /var/run/dpdk/spdk_pid73094 00:22:43.636 Removing: /var/run/dpdk/spdk_pid73455 00:22:43.636 Removing: /var/run/dpdk/spdk_pid73855 00:22:43.636 Removing: /var/run/dpdk/spdk_pid73856 00:22:43.636 Removing: /var/run/dpdk/spdk_pid73857 00:22:43.636 Removing: /var/run/dpdk/spdk_pid74118 00:22:43.636 Removing: /var/run/dpdk/spdk_pid74438 00:22:43.636 Removing: /var/run/dpdk/spdk_pid74440 00:22:43.636 Removing: /var/run/dpdk/spdk_pid74767 00:22:43.636 Removing: /var/run/dpdk/spdk_pid74781 00:22:43.636 Removing: /var/run/dpdk/spdk_pid74801 
00:22:43.636 Removing: /var/run/dpdk/spdk_pid74826 00:22:43.636 Removing: /var/run/dpdk/spdk_pid74833 00:22:43.636 Removing: /var/run/dpdk/spdk_pid75184 00:22:43.636 Removing: /var/run/dpdk/spdk_pid75238 00:22:43.636 Removing: /var/run/dpdk/spdk_pid75558 00:22:43.636 Removing: /var/run/dpdk/spdk_pid75751 00:22:43.636 Removing: /var/run/dpdk/spdk_pid76183 00:22:43.636 Removing: /var/run/dpdk/spdk_pid76746 00:22:43.636 Removing: /var/run/dpdk/spdk_pid77626 00:22:43.636 Removing: /var/run/dpdk/spdk_pid78264 00:22:43.636 Removing: /var/run/dpdk/spdk_pid78266 00:22:43.636 Removing: /var/run/dpdk/spdk_pid80296 00:22:43.636 Removing: /var/run/dpdk/spdk_pid80349 00:22:43.636 Removing: /var/run/dpdk/spdk_pid80409 00:22:43.636 Removing: /var/run/dpdk/spdk_pid80476 00:22:43.636 Removing: /var/run/dpdk/spdk_pid80597 00:22:43.636 Removing: /var/run/dpdk/spdk_pid80653 00:22:43.636 Removing: /var/run/dpdk/spdk_pid80709 00:22:43.636 Removing: /var/run/dpdk/spdk_pid80758 00:22:43.636 Removing: /var/run/dpdk/spdk_pid81123 00:22:43.636 Removing: /var/run/dpdk/spdk_pid82349 00:22:43.636 Removing: /var/run/dpdk/spdk_pid82497 00:22:43.636 Removing: /var/run/dpdk/spdk_pid82732 00:22:43.636 Removing: /var/run/dpdk/spdk_pid83331 00:22:43.636 Removing: /var/run/dpdk/spdk_pid83491 00:22:43.636 Removing: /var/run/dpdk/spdk_pid83649 00:22:43.636 Removing: /var/run/dpdk/spdk_pid83748 00:22:43.636 Removing: /var/run/dpdk/spdk_pid83918 00:22:43.636 Removing: /var/run/dpdk/spdk_pid84031 00:22:43.636 Removing: /var/run/dpdk/spdk_pid84749 00:22:43.636 Removing: /var/run/dpdk/spdk_pid84784 00:22:43.636 Removing: /var/run/dpdk/spdk_pid84818 00:22:43.636 Removing: /var/run/dpdk/spdk_pid85069 00:22:43.636 Removing: /var/run/dpdk/spdk_pid85104 00:22:43.636 Removing: /var/run/dpdk/spdk_pid85138 00:22:43.636 Removing: /var/run/dpdk/spdk_pid85606 00:22:43.636 Removing: /var/run/dpdk/spdk_pid85620 00:22:43.636 Removing: /var/run/dpdk/spdk_pid85853 00:22:43.636 Removing: /var/run/dpdk/spdk_pid85986 00:22:43.636 Removing: /var/run/dpdk/spdk_pid85995 00:22:43.636 Clean 00:22:43.636 08:33:45 -- common/autotest_common.sh@1451 -- # return 0 00:22:43.636 08:33:45 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:22:43.636 08:33:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.636 08:33:45 -- common/autotest_common.sh@10 -- # set +x 00:22:43.895 08:33:45 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:22:43.895 08:33:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.895 08:33:45 -- common/autotest_common.sh@10 -- # set +x 00:22:43.895 08:33:45 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:43.895 08:33:45 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:43.895 08:33:45 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:43.895 08:33:45 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:22:43.895 08:33:45 -- spdk/autotest.sh@394 -- # hostname 00:22:43.895 08:33:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:44.154 geninfo: WARNING: invalid characters removed from testname! 
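The lcov steps that follow merge the baseline and test captures into cov_total.info and then strip dpdk, system, and example/app sources from it. A condensed Python sketch of that post-processing; paths and filter patterns are taken from the commands below, and the long list of --rc options is abbreviated to the branch/function-coverage ones:

    # Sketch of the coverage post-processing below: merge base + test captures,
    # then remove third-party and tooling paths from the combined report.
    import subprocess

    OUT = "/home/vagrant/spdk_repo/spdk/../output"
    LCOV = ["lcov", "--rc", "lcov_branch_coverage=1",
            "--rc", "lcov_function_coverage=1", "-q"]

    subprocess.run(LCOV + ["-a", f"{OUT}/cov_base.info", "-a", f"{OUT}/cov_test.info",
                           "-o", f"{OUT}/cov_total.info"], check=True)
    for pattern in ["*/dpdk/*", "/usr/*", "*/examples/vmd/*",
                    "*/app/spdk_lspci/*", "*/app/spdk_top/*"]:
        subprocess.run(LCOV + ["-r", f"{OUT}/cov_total.info", pattern,
                               "-o", f"{OUT}/cov_total.info"], check=True)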
00:23:06.095 08:34:06 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:08.630 08:34:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:11.165 08:34:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:13.699 08:34:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:16.286 08:34:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:18.190 08:34:19 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:20.723 08:34:22 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:20.723 08:34:22 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:23:20.723 08:34:22 -- common/autotest_common.sh@1691 -- $ lcov --version 00:23:20.723 08:34:22 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:23:20.982 08:34:22 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:23:20.982 08:34:22 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:23:20.982 08:34:22 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:23:20.982 08:34:22 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:23:20.982 08:34:22 -- scripts/common.sh@336 -- $ IFS=.-: 00:23:20.982 08:34:22 -- scripts/common.sh@336 -- $ read -ra ver1 00:23:20.982 08:34:22 -- scripts/common.sh@337 -- $ IFS=.-: 00:23:20.982 08:34:22 -- scripts/common.sh@337 -- $ read -ra ver2 00:23:20.982 08:34:22 -- scripts/common.sh@338 -- $ local 'op=<' 00:23:20.982 08:34:22 -- scripts/common.sh@340 -- $ ver1_l=2 00:23:20.982 08:34:22 -- scripts/common.sh@341 -- $ ver2_l=1 00:23:20.982 08:34:22 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:23:20.982 08:34:22 -- scripts/common.sh@344 -- $ case "$op" in 00:23:20.982 08:34:22 -- scripts/common.sh@345 -- $ : 1 00:23:20.982 08:34:22 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:23:20.982 08:34:22 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:20.982 08:34:22 -- scripts/common.sh@365 -- $ decimal 1 00:23:20.982 08:34:22 -- scripts/common.sh@353 -- $ local d=1 00:23:20.982 08:34:22 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:23:20.982 08:34:22 -- scripts/common.sh@355 -- $ echo 1 00:23:20.982 08:34:22 -- scripts/common.sh@365 -- $ ver1[v]=1 00:23:20.982 08:34:22 -- scripts/common.sh@366 -- $ decimal 2 00:23:20.982 08:34:22 -- scripts/common.sh@353 -- $ local d=2 00:23:20.982 08:34:22 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:23:20.982 08:34:22 -- scripts/common.sh@355 -- $ echo 2 00:23:20.982 08:34:22 -- scripts/common.sh@366 -- $ ver2[v]=2 00:23:20.982 08:34:22 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:23:20.982 08:34:22 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:23:20.982 08:34:22 -- scripts/common.sh@368 -- $ return 0 00:23:20.982 08:34:22 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.982 08:34:22 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:23:20.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.982 --rc genhtml_branch_coverage=1 00:23:20.982 --rc genhtml_function_coverage=1 00:23:20.982 --rc genhtml_legend=1 00:23:20.982 --rc geninfo_all_blocks=1 00:23:20.982 --rc geninfo_unexecuted_blocks=1 00:23:20.982 00:23:20.982 ' 00:23:20.982 08:34:22 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:23:20.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.982 --rc genhtml_branch_coverage=1 00:23:20.982 --rc genhtml_function_coverage=1 00:23:20.982 --rc genhtml_legend=1 00:23:20.982 --rc geninfo_all_blocks=1 00:23:20.982 --rc geninfo_unexecuted_blocks=1 00:23:20.982 00:23:20.982 ' 00:23:20.982 08:34:22 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:23:20.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.982 --rc genhtml_branch_coverage=1 00:23:20.982 --rc genhtml_function_coverage=1 00:23:20.982 --rc genhtml_legend=1 00:23:20.982 --rc geninfo_all_blocks=1 00:23:20.982 --rc geninfo_unexecuted_blocks=1 00:23:20.982 00:23:20.982 ' 00:23:20.982 08:34:22 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:23:20.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.982 --rc genhtml_branch_coverage=1 00:23:20.982 --rc genhtml_function_coverage=1 00:23:20.982 --rc genhtml_legend=1 00:23:20.982 --rc geninfo_all_blocks=1 00:23:20.982 --rc geninfo_unexecuted_blocks=1 00:23:20.982 00:23:20.982 ' 00:23:20.982 08:34:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:20.982 08:34:22 -- scripts/common.sh@15 -- $ shopt -s extglob 00:23:20.982 08:34:22 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:23:20.982 08:34:22 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.982 08:34:22 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.982 08:34:22 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.982 08:34:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.982 08:34:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.982 08:34:22 -- paths/export.sh@5 -- $ export PATH 00:23:20.982 08:34:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.982 08:34:22 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:23:20.982 08:34:22 -- common/autobuild_common.sh@486 -- $ date +%s 00:23:20.982 08:34:22 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728981262.XXXXXX 00:23:20.982 08:34:22 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728981262.xywHQ1 00:23:20.982 08:34:22 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:23:20.982 08:34:22 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:23:20.982 08:34:22 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:23:20.982 08:34:22 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:23:20.982 08:34:22 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:23:20.982 08:34:22 -- common/autobuild_common.sh@502 -- $ get_config_params 00:23:20.982 08:34:22 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:23:20.982 08:34:22 -- common/autotest_common.sh@10 -- $ set +x 00:23:20.982 08:34:22 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:23:20.982 08:34:22 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:23:20.982 08:34:22 -- pm/common@17 -- $ local monitor 00:23:20.982 08:34:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:20.982 08:34:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:20.982 
00:23:20.982 08:34:22 -- pm/common@25 -- $ sleep 1
00:23:20.982 08:34:22 -- pm/common@21 -- $ date +%s
00:23:20.982 08:34:22 -- pm/common@21 -- $ date +%s
00:23:20.982 08:34:22 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728981262
00:23:20.982 08:34:22 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728981262
00:23:20.982 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728981262_collect-cpu-load.pm.log
00:23:20.982 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728981262_collect-vmstat.pm.log
00:23:21.919 08:34:23 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:23:21.919 08:34:23 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:23:21.919 08:34:23 -- spdk/autopackage.sh@14 -- $ timing_finish
00:23:21.919 08:34:23 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:23:21.919 08:34:23 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:23:21.919 08:34:23 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:23:21.919 08:34:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:23:21.919 08:34:23 -- pm/common@29 -- $ signal_monitor_resources TERM
00:23:21.919 08:34:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:23:21.919 08:34:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:21.919 08:34:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:23:21.919 08:34:23 -- pm/common@44 -- $ pid=87750
00:23:21.919 08:34:23 -- pm/common@50 -- $ kill -TERM 87750
00:23:21.919 08:34:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:21.919 08:34:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:23:22.178 08:34:23 -- pm/common@44 -- $ pid=87751
00:23:22.178 08:34:23 -- pm/common@50 -- $ kill -TERM 87751
00:23:22.178 + [[ -n 5207 ]]
00:23:22.178 + sudo kill 5207
00:23:22.188 [Pipeline] }
00:23:22.206 [Pipeline] // timeout
00:23:22.212 [Pipeline] }
00:23:22.226 [Pipeline] // stage
00:23:22.232 [Pipeline] }
00:23:22.246 [Pipeline] // catchError
00:23:22.256 [Pipeline] stage
00:23:22.258 [Pipeline] { (Stop VM)
00:23:22.270 [Pipeline] sh
00:23:22.550 + vagrant halt
00:23:25.842 ==> default: Halting domain...
00:23:31.183 [Pipeline] sh
00:23:31.461 + vagrant destroy -f
00:23:34.745 ==> default: Removing domain...
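The pm/common trace above shows the resource monitors being started with one PID file each under output/power, a trap registering stop_monitor_resources on EXIT, and the teardown sending SIGTERM to whichever collectors left a PID file behind. A hedged bash sketch of that lifecycle follows; the collector paths come from the trace, while the exact PID-file handshake (here the parent writes $! itself) is an assumption rather than the real pm/common implementation.

#!/usr/bin/env bash
# Sketch of the PID-file based monitor lifecycle; not the real pm/common code.

spdk_repo=/home/vagrant/spdk_repo/spdk
power_dir=$spdk_repo/../output/power
monitors=(collect-cpu-load collect-vmstat)

start_monitor_resources() {
    local monitor
    mkdir -p "$power_dir"
    for monitor in "${monitors[@]}"; do
        # Background each collector and record its PID so it can be stopped later.
        "$spdk_repo/scripts/perf/pm/$monitor" -d "$power_dir" -l -p "monitor.autopackage.sh.$(date +%s)" &
        echo $! > "$power_dir/$monitor.pid"
    done
}

stop_monitor_resources() {
    local signal=TERM monitor pid
    for monitor in "${monitors[@]}"; do
        # Only signal collectors that actually left a PID file behind.
        if [[ -e $power_dir/$monitor.pid ]]; then
            pid=$(< "$power_dir/$monitor.pid")
            kill -"$signal" "$pid"
        fi
    done
}

start_monitor_resources
trap stop_monitor_resources EXIT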
00:23:34.757 [Pipeline] sh
00:23:35.037 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_3/output
00:23:35.046 [Pipeline] }
00:23:35.061 [Pipeline] // stage
00:23:35.067 [Pipeline] }
00:23:35.081 [Pipeline] // dir
00:23:35.086 [Pipeline] }
00:23:35.101 [Pipeline] // wrap
00:23:35.107 [Pipeline] }
00:23:35.120 [Pipeline] // catchError
00:23:35.131 [Pipeline] stage
00:23:35.133 [Pipeline] { (Epilogue)
00:23:35.146 [Pipeline] sh
00:23:35.427 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:40.737 [Pipeline] catchError
00:23:40.739 [Pipeline] {
00:23:40.752 [Pipeline] sh
00:23:41.033 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:41.033 Artifacts sizes are good
00:23:41.043 [Pipeline] }
00:23:41.057 [Pipeline] // catchError
00:23:41.068 [Pipeline] archiveArtifacts
00:23:41.075 Archiving artifacts
00:23:41.236 [Pipeline] cleanWs
00:23:41.250 [WS-CLEANUP] Deleting project workspace...
00:23:41.250 [WS-CLEANUP] Deferred wipeout is used...
00:23:41.272 [WS-CLEANUP] done
00:23:41.274 [Pipeline] }
00:23:41.290 [Pipeline] // stage
00:23:41.295 [Pipeline] }
00:23:41.309 [Pipeline] // node
00:23:41.314 [Pipeline] End of Pipeline
00:23:41.353 Finished: SUCCESS
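The epilogue above compresses the collected output, runs check_artifacts_size.sh (which reports "Artifacts sizes are good" in this run), archives the result and wipes the workspace. The log does not show the contents of that helper, so the sketch below is only a guess at what such a size gate could look like; the 256 MiB ceiling and the directory argument are arbitrary assumptions.

#!/usr/bin/env bash
# Hypothetical artifact-size gate; not the real check_artifacts_size.sh.
set -euo pipefail

artifacts_dir=${1:-output}
limit_kb=$((256 * 1024))   # assumed 256 MiB ceiling for this sketch

used_kb=$(du -sk "$artifacts_dir" | awk '{print $1}')
if (( used_kb > limit_kb )); then
    echo "Artifacts too large: ${used_kb} KiB > ${limit_kb} KiB" >&2
    exit 1
fi
echo "Artifacts sizes are good"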